00:00:00.000 Started by upstream project "autotest-per-patch" build number 127179 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24320 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.134 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.200 > git --version # 'git version 2.39.2' 00:00:00.200 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/32/24332/3 # timeout=5 00:00:06.656 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.666 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.677 Checking out Revision 42e00731b22fe9a8063e4b475dece9d4b345521a (FETCH_HEAD) 00:00:06.677 > git config core.sparsecheckout # timeout=10 00:00:06.686 > git read-tree -mu HEAD # timeout=10 00:00:06.701 > git checkout -f 42e00731b22fe9a8063e4b475dece9d4b345521a # timeout=5 00:00:06.720 Commit message: "jjb/autotest: add SPDK_TEST_RAID flag for docker-autotest jobs" 00:00:06.720 > git rev-list --no-walk 8a3af85d3e939d61c9d7d5b7d8ed38da3ea5ca0b # timeout=10 00:00:06.812 [Pipeline] Start of Pipeline 00:00:06.823 [Pipeline] library 00:00:06.825 Loading library shm_lib@master 00:00:06.825 Library shm_lib@master is cached. Copying from home. 00:00:06.839 [Pipeline] node 00:00:06.848 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.849 [Pipeline] { 00:00:06.857 [Pipeline] catchError 00:00:06.858 [Pipeline] { 00:00:06.868 [Pipeline] wrap 00:00:06.874 [Pipeline] { 00:00:06.882 [Pipeline] stage 00:00:06.884 [Pipeline] { (Prologue) 00:00:06.902 [Pipeline] echo 00:00:06.903 Node: VM-host-SM17 00:00:06.909 [Pipeline] cleanWs 00:00:06.918 [WS-CLEANUP] Deleting project workspace... 00:00:06.918 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.923 [WS-CLEANUP] done 00:00:07.115 [Pipeline] setCustomBuildProperty 00:00:07.176 [Pipeline] httpRequest 00:00:07.203 [Pipeline] echo 00:00:07.204 Sorcerer 10.211.164.101 is alive 00:00:07.210 [Pipeline] httpRequest 00:00:07.221 HttpMethod: GET 00:00:07.221 URL: http://10.211.164.101/packages/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:07.221 Sending request to url: http://10.211.164.101/packages/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:07.247 Response Code: HTTP/1.1 200 OK 00:00:07.247 Success: Status code 200 is in the accepted range: 200,404 00:00:07.248 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:33.440 [Pipeline] sh 00:00:33.715 + tar --no-same-owner -xf jbp_42e00731b22fe9a8063e4b475dece9d4b345521a.tar.gz 00:00:33.729 [Pipeline] httpRequest 00:00:33.751 [Pipeline] echo 00:00:33.753 Sorcerer 10.211.164.101 is alive 00:00:33.760 [Pipeline] httpRequest 00:00:33.764 HttpMethod: GET 00:00:33.764 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:33.765 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:33.766 Response Code: HTTP/1.1 200 OK 00:00:33.766 Success: Status code 200 is in the accepted range: 200,404 00:00:33.766 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:51.727 [Pipeline] sh 00:00:52.008 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:55.314 [Pipeline] sh 00:00:55.593 + git -C spdk log --oneline -n5 00:00:55.594 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:00:55.594 fc2398dfa raid: clear base bdev configure_cb after executing 00:00:55.594 5558f3f50 raid: complete bdev_raid_create after sb is written 00:00:55.594 d005e023b raid: fix empty slot not updated in sb after resize 00:00:55.594 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:00:55.613 [Pipeline] writeFile 00:00:55.630 [Pipeline] sh 00:00:55.909 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:55.920 [Pipeline] sh 00:00:56.198 + cat autorun-spdk.conf 00:00:56.198 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.198 SPDK_TEST_NVMF=1 00:00:56.198 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.198 SPDK_TEST_URING=1 00:00:56.198 SPDK_TEST_USDT=1 00:00:56.198 SPDK_RUN_UBSAN=1 00:00:56.198 NET_TYPE=virt 00:00:56.198 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.204 RUN_NIGHTLY=0 00:00:56.207 [Pipeline] } 00:00:56.224 [Pipeline] // stage 00:00:56.237 [Pipeline] stage 00:00:56.239 [Pipeline] { (Run VM) 00:00:56.252 [Pipeline] sh 00:00:56.532 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:56.532 + echo 'Start stage prepare_nvme.sh' 00:00:56.532 Start stage prepare_nvme.sh 00:00:56.532 + [[ -n 7 ]] 00:00:56.532 + disk_prefix=ex7 00:00:56.532 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:56.532 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:56.532 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:56.532 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.532 ++ SPDK_TEST_NVMF=1 00:00:56.532 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.532 ++ SPDK_TEST_URING=1 00:00:56.532 ++ SPDK_TEST_USDT=1 00:00:56.532 ++ SPDK_RUN_UBSAN=1 00:00:56.532 ++ NET_TYPE=virt 00:00:56.532 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.532 ++ RUN_NIGHTLY=0 00:00:56.532 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:56.532 + nvme_files=() 00:00:56.532 + declare -A nvme_files 00:00:56.532 + backend_dir=/var/lib/libvirt/images/backends 00:00:56.532 + nvme_files['nvme.img']=5G 00:00:56.532 + nvme_files['nvme-cmb.img']=5G 00:00:56.532 + nvme_files['nvme-multi0.img']=4G 00:00:56.532 + nvme_files['nvme-multi1.img']=4G 00:00:56.532 + nvme_files['nvme-multi2.img']=4G 00:00:56.532 + nvme_files['nvme-openstack.img']=8G 00:00:56.532 + nvme_files['nvme-zns.img']=5G 00:00:56.532 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:56.532 + (( SPDK_TEST_FTL == 1 )) 00:00:56.532 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:56.532 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:56.532 + for nvme in "${!nvme_files[@]}" 00:00:56.532 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:56.532 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.532 + for nvme in "${!nvme_files[@]}" 00:00:56.532 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:56.532 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.532 + for nvme in "${!nvme_files[@]}" 00:00:56.532 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:56.532 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:56.532 + for nvme in "${!nvme_files[@]}" 00:00:56.532 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:56.532 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.532 + for nvme in "${!nvme_files[@]}" 00:00:56.532 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:56.532 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.532 + for nvme in "${!nvme_files[@]}" 00:00:56.532 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:56.532 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.532 + for nvme in "${!nvme_files[@]}" 00:00:56.532 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:57.099 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.099 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:57.099 + echo 'End stage prepare_nvme.sh' 00:00:57.099 End stage prepare_nvme.sh 00:00:57.112 [Pipeline] sh 00:00:57.454 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:57.454 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:00:57.454 00:00:57.454 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:57.454 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:57.454 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:57.454 HELP=0 00:00:57.454 DRY_RUN=0 00:00:57.454 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:57.454 NVME_DISKS_TYPE=nvme,nvme, 00:00:57.454 NVME_AUTO_CREATE=0 00:00:57.454 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:57.454 NVME_CMB=,, 00:00:57.454 NVME_PMR=,, 00:00:57.454 NVME_ZNS=,, 00:00:57.454 NVME_MS=,, 00:00:57.454 NVME_FDP=,, 
00:00:57.454 SPDK_VAGRANT_DISTRO=fedora38 00:00:57.454 SPDK_VAGRANT_VMCPU=10 00:00:57.454 SPDK_VAGRANT_VMRAM=12288 00:00:57.454 SPDK_VAGRANT_PROVIDER=libvirt 00:00:57.454 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:57.454 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:57.454 SPDK_OPENSTACK_NETWORK=0 00:00:57.454 VAGRANT_PACKAGE_BOX=0 00:00:57.454 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:57.454 FORCE_DISTRO=true 00:00:57.454 VAGRANT_BOX_VERSION= 00:00:57.454 EXTRA_VAGRANTFILES= 00:00:57.454 NIC_MODEL=e1000 00:00:57.454 00:00:57.455 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:57.455 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:00.735 Bringing machine 'default' up with 'libvirt' provider... 00:01:01.301 ==> default: Creating image (snapshot of base box volume). 00:01:01.558 ==> default: Creating domain with the following settings... 00:01:01.558 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721912213_7d5388fb4db85736c1c7 00:01:01.558 ==> default: -- Domain type: kvm 00:01:01.558 ==> default: -- Cpus: 10 00:01:01.558 ==> default: -- Feature: acpi 00:01:01.558 ==> default: -- Feature: apic 00:01:01.558 ==> default: -- Feature: pae 00:01:01.558 ==> default: -- Memory: 12288M 00:01:01.558 ==> default: -- Memory Backing: hugepages: 00:01:01.558 ==> default: -- Management MAC: 00:01:01.558 ==> default: -- Loader: 00:01:01.558 ==> default: -- Nvram: 00:01:01.558 ==> default: -- Base box: spdk/fedora38 00:01:01.558 ==> default: -- Storage pool: default 00:01:01.558 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721912213_7d5388fb4db85736c1c7.img (20G) 00:01:01.558 ==> default: -- Volume Cache: default 00:01:01.558 ==> default: -- Kernel: 00:01:01.558 ==> default: -- Initrd: 00:01:01.558 ==> default: -- Graphics Type: vnc 00:01:01.558 ==> default: -- Graphics Port: -1 00:01:01.558 ==> default: -- Graphics IP: 127.0.0.1 00:01:01.558 ==> default: -- Graphics Password: Not defined 00:01:01.558 ==> default: -- Video Type: cirrus 00:01:01.558 ==> default: -- Video VRAM: 9216 00:01:01.558 ==> default: -- Sound Type: 00:01:01.558 ==> default: -- Keymap: en-us 00:01:01.559 ==> default: -- TPM Path: 00:01:01.559 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:01.559 ==> default: -- Command line args: 00:01:01.559 ==> default: -> value=-device, 00:01:01.559 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:01.559 ==> default: -> value=-drive, 00:01:01.559 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:01.559 ==> default: -> value=-device, 00:01:01.559 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.559 ==> default: -> value=-device, 00:01:01.559 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:01.559 ==> default: -> value=-drive, 00:01:01.559 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:01.559 ==> default: -> value=-device, 00:01:01.559 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.559 ==> default: -> value=-drive, 
00:01:01.559 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:01.559 ==> default: -> value=-device, 00:01:01.559 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.559 ==> default: -> value=-drive, 00:01:01.559 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:01.559 ==> default: -> value=-device, 00:01:01.559 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.817 ==> default: Creating shared folders metadata... 00:01:01.817 ==> default: Starting domain. 00:01:03.718 ==> default: Waiting for domain to get an IP address... 00:01:21.796 ==> default: Waiting for SSH to become available... 00:01:21.796 ==> default: Configuring and enabling network interfaces... 00:01:24.339 default: SSH address: 192.168.121.200:22 00:01:24.339 default: SSH username: vagrant 00:01:24.339 default: SSH auth method: private key 00:01:26.253 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:34.384 ==> default: Mounting SSHFS shared folder... 00:01:34.949 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:34.950 ==> default: Checking Mount.. 00:01:36.336 ==> default: Folder Successfully Mounted! 00:01:36.336 ==> default: Running provisioner: file... 00:01:37.268 default: ~/.gitconfig => .gitconfig 00:01:37.525 00:01:37.525 SUCCESS! 00:01:37.525 00:01:37.525 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:37.525 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:37.525 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:37.525 00:01:37.533 [Pipeline] } 00:01:37.551 [Pipeline] // stage 00:01:37.560 [Pipeline] dir 00:01:37.561 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:37.563 [Pipeline] { 00:01:37.575 [Pipeline] catchError 00:01:37.576 [Pipeline] { 00:01:37.588 [Pipeline] sh 00:01:37.934 + vagrant ssh-config --host vagrant 00:01:37.935 + sed -ne /^Host/,$p 00:01:37.935 + tee ssh_conf 00:01:42.134 Host vagrant 00:01:42.134 HostName 192.168.121.200 00:01:42.134 User vagrant 00:01:42.134 Port 22 00:01:42.134 UserKnownHostsFile /dev/null 00:01:42.134 StrictHostKeyChecking no 00:01:42.134 PasswordAuthentication no 00:01:42.134 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:42.134 IdentitiesOnly yes 00:01:42.134 LogLevel FATAL 00:01:42.134 ForwardAgent yes 00:01:42.134 ForwardX11 yes 00:01:42.134 00:01:42.147 [Pipeline] withEnv 00:01:42.149 [Pipeline] { 00:01:42.161 [Pipeline] sh 00:01:42.440 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:42.440 source /etc/os-release 00:01:42.440 [[ -e /image.version ]] && img=$(< /image.version) 00:01:42.440 # Minimal, systemd-like check. 
00:01:42.440 if [[ -e /.dockerenv ]]; then 00:01:42.440 # Clear garbage from the node's name: 00:01:42.440 # agt-er_autotest_547-896 -> autotest_547-896 00:01:42.440 # $HOSTNAME is the actual container id 00:01:42.440 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:42.440 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:42.440 # We can assume this is a mount from a host where container is running, 00:01:42.440 # so fetch its hostname to easily identify the target swarm worker. 00:01:42.440 container="$(< /etc/hostname) ($agent)" 00:01:42.440 else 00:01:42.440 # Fallback 00:01:42.440 container=$agent 00:01:42.440 fi 00:01:42.440 fi 00:01:42.440 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:42.440 00:01:42.709 [Pipeline] } 00:01:42.729 [Pipeline] // withEnv 00:01:42.737 [Pipeline] setCustomBuildProperty 00:01:42.753 [Pipeline] stage 00:01:42.756 [Pipeline] { (Tests) 00:01:42.774 [Pipeline] sh 00:01:43.053 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:43.067 [Pipeline] sh 00:01:43.359 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:43.374 [Pipeline] timeout 00:01:43.374 Timeout set to expire in 30 min 00:01:43.376 [Pipeline] { 00:01:43.392 [Pipeline] sh 00:01:43.670 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:44.236 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:01:44.248 [Pipeline] sh 00:01:44.527 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:44.797 [Pipeline] sh 00:01:45.075 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:45.346 [Pipeline] sh 00:01:45.623 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:45.623 ++ readlink -f spdk_repo 00:01:45.881 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:45.881 + [[ -n /home/vagrant/spdk_repo ]] 00:01:45.881 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:45.881 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:45.881 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:45.881 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:45.881 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:45.881 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:45.881 + cd /home/vagrant/spdk_repo 00:01:45.881 + source /etc/os-release 00:01:45.881 ++ NAME='Fedora Linux' 00:01:45.881 ++ VERSION='38 (Cloud Edition)' 00:01:45.881 ++ ID=fedora 00:01:45.881 ++ VERSION_ID=38 00:01:45.881 ++ VERSION_CODENAME= 00:01:45.881 ++ PLATFORM_ID=platform:f38 00:01:45.881 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:45.881 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:45.881 ++ LOGO=fedora-logo-icon 00:01:45.881 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:45.881 ++ HOME_URL=https://fedoraproject.org/ 00:01:45.881 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:45.881 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:45.881 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:45.881 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:45.881 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:45.881 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:45.881 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:45.881 ++ SUPPORT_END=2024-05-14 00:01:45.881 ++ VARIANT='Cloud Edition' 00:01:45.881 ++ VARIANT_ID=cloud 00:01:45.881 + uname -a 00:01:45.881 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:45.881 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:46.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:46.140 Hugepages 00:01:46.140 node hugesize free / total 00:01:46.140 node0 1048576kB 0 / 0 00:01:46.140 node0 2048kB 0 / 0 00:01:46.140 00:01:46.140 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:46.140 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:46.398 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:46.398 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:46.398 + rm -f /tmp/spdk-ld-path 00:01:46.398 + source autorun-spdk.conf 00:01:46.398 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.398 ++ SPDK_TEST_NVMF=1 00:01:46.398 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.398 ++ SPDK_TEST_URING=1 00:01:46.398 ++ SPDK_TEST_USDT=1 00:01:46.398 ++ SPDK_RUN_UBSAN=1 00:01:46.398 ++ NET_TYPE=virt 00:01:46.398 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.398 ++ RUN_NIGHTLY=0 00:01:46.398 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:46.398 + [[ -n '' ]] 00:01:46.398 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:46.398 + for M in /var/spdk/build-*-manifest.txt 00:01:46.398 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:46.398 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.398 + for M in /var/spdk/build-*-manifest.txt 00:01:46.398 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:46.398 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:46.398 ++ uname 00:01:46.398 + [[ Linux == \L\i\n\u\x ]] 00:01:46.398 + sudo dmesg -T 00:01:46.398 + sudo dmesg --clear 00:01:46.398 + dmesg_pid=5105 00:01:46.398 + sudo dmesg -Tw 00:01:46.398 + [[ Fedora Linux == FreeBSD ]] 00:01:46.398 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.398 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.398 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:46.398 + [[ -x /usr/src/fio-static/fio ]] 00:01:46.398 + export FIO_BIN=/usr/src/fio-static/fio 
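The nvme0 controller (one namespace) and nvme1 controller (three namespaces) listed by setup.sh status above come straight from the -device/-drive arguments vagrant generated earlier in this log. Assembled by hand, the NVMe part of that QEMU invocation would look roughly like the sketch below; the device and drive arguments are copied from the log, while running qemu-system-x86_64 directly (without the machine, memory, display and boot-disk options vagrant adds) is only an illustrative assumption.

  # nvme-0 (serial 12340): single namespace backed by ex7-nvme.img
  # nvme-1 (serial 12341): namespaces 1-3 backed by ex7-nvme-multi0/1/2.img
  qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

This matches the guest-side view above: nvme0n1 on the first controller, nvme1n1 through nvme1n3 on the second.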
00:01:46.398 + FIO_BIN=/usr/src/fio-static/fio 00:01:46.398 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:46.398 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:46.398 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:46.398 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.398 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.398 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:46.398 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.398 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.398 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:46.398 Test configuration: 00:01:46.398 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.398 SPDK_TEST_NVMF=1 00:01:46.398 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.398 SPDK_TEST_URING=1 00:01:46.398 SPDK_TEST_USDT=1 00:01:46.398 SPDK_RUN_UBSAN=1 00:01:46.398 NET_TYPE=virt 00:01:46.398 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:46.657 RUN_NIGHTLY=0 12:57:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:46.657 12:57:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:46.657 12:57:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:46.657 12:57:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:46.657 12:57:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.657 12:57:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.657 12:57:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.657 12:57:38 -- paths/export.sh@5 -- $ export PATH 00:01:46.657 12:57:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.657 12:57:38 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:46.657 12:57:38 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:46.657 12:57:38 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721912258.XXXXXX 00:01:46.657 12:57:38 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721912258.zMVcqo 00:01:46.657 12:57:38 -- 
common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:46.657 12:57:38 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:46.657 12:57:38 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:46.657 12:57:38 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:46.657 12:57:38 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:46.657 12:57:38 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:46.657 12:57:38 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:46.657 12:57:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.657 12:57:38 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:46.657 12:57:38 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:46.657 12:57:38 -- pm/common@17 -- $ local monitor 00:01:46.657 12:57:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.657 12:57:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.657 12:57:38 -- pm/common@21 -- $ date +%s 00:01:46.657 12:57:38 -- pm/common@25 -- $ sleep 1 00:01:46.657 12:57:38 -- pm/common@21 -- $ date +%s 00:01:46.657 12:57:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721912258 00:01:46.657 12:57:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721912258 00:01:46.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721912258_collect-vmstat.pm.log 00:01:46.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721912258_collect-cpu-load.pm.log 00:01:47.591 12:57:39 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:47.591 12:57:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:47.591 12:57:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:47.591 12:57:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:47.591 12:57:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:47.591 Thu Jul 25 12:57:39 PM UTC 2024 00:01:47.591 12:57:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:47.591 v24.09-pre-321-g704257090 00:01:47.591 12:57:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:47.591 12:57:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:47.591 12:57:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:47.591 12:57:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:47.591 12:57:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:47.591 12:57:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.591 ************************************ 00:01:47.591 START TEST ubsan 00:01:47.591 ************************************ 00:01:47.591 using ubsan 00:01:47.591 12:57:39 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:47.591 00:01:47.591 real 0m0.000s 00:01:47.591 user 0m0.000s 00:01:47.591 sys 0m0.000s 00:01:47.591 
12:57:39 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:47.591 12:57:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:47.592 ************************************ 00:01:47.592 END TEST ubsan 00:01:47.592 ************************************ 00:01:47.592 12:57:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:47.592 12:57:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:47.592 12:57:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:47.592 12:57:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:47.592 12:57:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:47.592 12:57:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:47.592 12:57:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:47.592 12:57:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:47.592 12:57:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:47.850 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:47.850 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:48.461 Using 'verbs' RDMA provider 00:02:03.924 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:16.124 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:16.124 Creating mk/config.mk...done. 00:02:16.124 Creating mk/cc.flags.mk...done. 00:02:16.124 Type 'make' to build. 00:02:16.124 12:58:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:16.124 12:58:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:16.124 12:58:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.124 12:58:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.124 ************************************ 00:02:16.124 START TEST make 00:02:16.124 ************************************ 00:02:16.124 12:58:06 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:16.124 make[1]: Nothing to be done for 'all'. 
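The autobuild step traced above reduces to the following manual sequence, a sketch using the exact configure flags, repository path and job count printed in the log; invoking it standalone outside this CI VM is the only assumption.

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10   # make then drives the embedded DPDK Meson/ninja build shown below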
00:02:26.090 The Meson build system 00:02:26.090 Version: 1.3.1 00:02:26.090 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:26.090 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:26.090 Build type: native build 00:02:26.090 Program cat found: YES (/usr/bin/cat) 00:02:26.090 Project name: DPDK 00:02:26.090 Project version: 24.03.0 00:02:26.090 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:26.090 C linker for the host machine: cc ld.bfd 2.39-16 00:02:26.090 Host machine cpu family: x86_64 00:02:26.090 Host machine cpu: x86_64 00:02:26.090 Message: ## Building in Developer Mode ## 00:02:26.090 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.090 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:26.090 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.090 Program python3 found: YES (/usr/bin/python3) 00:02:26.090 Program cat found: YES (/usr/bin/cat) 00:02:26.090 Compiler for C supports arguments -march=native: YES 00:02:26.090 Checking for size of "void *" : 8 00:02:26.090 Checking for size of "void *" : 8 (cached) 00:02:26.090 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:26.090 Library m found: YES 00:02:26.090 Library numa found: YES 00:02:26.090 Has header "numaif.h" : YES 00:02:26.090 Library fdt found: NO 00:02:26.090 Library execinfo found: NO 00:02:26.090 Has header "execinfo.h" : YES 00:02:26.090 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:26.090 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.090 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.090 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.090 Run-time dependency openssl found: YES 3.0.9 00:02:26.090 Run-time dependency libpcap found: YES 1.10.4 00:02:26.090 Has header "pcap.h" with dependency libpcap: YES 00:02:26.090 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.090 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.090 Compiler for C supports arguments -Wformat: YES 00:02:26.090 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:26.090 Compiler for C supports arguments -Wformat-security: NO 00:02:26.090 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.090 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.090 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.090 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.090 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.090 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.090 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.090 Compiler for C supports arguments -Wundef: YES 00:02:26.090 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.090 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:26.090 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:26.090 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.090 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:26.090 Program objdump found: YES (/usr/bin/objdump) 00:02:26.090 Compiler for C supports arguments -mavx512f: YES 00:02:26.090 Checking if "AVX512 checking" compiles: YES 00:02:26.090 Fetching value of define "__SSE4_2__" : 1 00:02:26.090 Fetching value of define 
"__AES__" : 1 00:02:26.090 Fetching value of define "__AVX__" : 1 00:02:26.090 Fetching value of define "__AVX2__" : 1 00:02:26.090 Fetching value of define "__AVX512BW__" : (undefined) 00:02:26.090 Fetching value of define "__AVX512CD__" : (undefined) 00:02:26.090 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:26.090 Fetching value of define "__AVX512F__" : (undefined) 00:02:26.090 Fetching value of define "__AVX512VL__" : (undefined) 00:02:26.090 Fetching value of define "__PCLMUL__" : 1 00:02:26.090 Fetching value of define "__RDRND__" : 1 00:02:26.090 Fetching value of define "__RDSEED__" : 1 00:02:26.090 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:26.090 Fetching value of define "__znver1__" : (undefined) 00:02:26.090 Fetching value of define "__znver2__" : (undefined) 00:02:26.090 Fetching value of define "__znver3__" : (undefined) 00:02:26.090 Fetching value of define "__znver4__" : (undefined) 00:02:26.090 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:26.090 Message: lib/log: Defining dependency "log" 00:02:26.090 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.091 Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.091 Checking for function "getentropy" : NO 00:02:26.091 Message: lib/eal: Defining dependency "eal" 00:02:26.091 Message: lib/ring: Defining dependency "ring" 00:02:26.091 Message: lib/rcu: Defining dependency "rcu" 00:02:26.091 Message: lib/mempool: Defining dependency "mempool" 00:02:26.091 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.091 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:26.091 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.091 Compiler for C supports arguments -mpclmul: YES 00:02:26.091 Compiler for C supports arguments -maes: YES 00:02:26.091 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.091 Compiler for C supports arguments -mavx512bw: YES 00:02:26.091 Compiler for C supports arguments -mavx512dq: YES 00:02:26.091 Compiler for C supports arguments -mavx512vl: YES 00:02:26.091 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.091 Compiler for C supports arguments -mavx2: YES 00:02:26.091 Compiler for C supports arguments -mavx: YES 00:02:26.091 Message: lib/net: Defining dependency "net" 00:02:26.091 Message: lib/meter: Defining dependency "meter" 00:02:26.091 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.091 Message: lib/pci: Defining dependency "pci" 00:02:26.091 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.091 Message: lib/hash: Defining dependency "hash" 00:02:26.091 Message: lib/timer: Defining dependency "timer" 00:02:26.091 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.091 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:26.091 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.091 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.091 Message: lib/power: Defining dependency "power" 00:02:26.091 Message: lib/reorder: Defining dependency "reorder" 00:02:26.091 Message: lib/security: Defining dependency "security" 00:02:26.091 Has header "linux/userfaultfd.h" : YES 00:02:26.091 Has header "linux/vduse.h" : YES 00:02:26.091 Message: lib/vhost: Defining dependency "vhost" 00:02:26.091 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:26.091 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:26.091 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:26.091 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:26.091 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:26.091 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:26.091 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:26.091 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:26.091 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:26.091 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:26.091 Program doxygen found: YES (/usr/bin/doxygen) 00:02:26.091 Configuring doxy-api-html.conf using configuration 00:02:26.091 Configuring doxy-api-man.conf using configuration 00:02:26.091 Program mandb found: YES (/usr/bin/mandb) 00:02:26.091 Program sphinx-build found: NO 00:02:26.091 Configuring rte_build_config.h using configuration 00:02:26.091 Message: 00:02:26.091 ================= 00:02:26.091 Applications Enabled 00:02:26.091 ================= 00:02:26.091 00:02:26.091 apps: 00:02:26.091 00:02:26.091 00:02:26.091 Message: 00:02:26.091 ================= 00:02:26.091 Libraries Enabled 00:02:26.091 ================= 00:02:26.091 00:02:26.091 libs: 00:02:26.091 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:26.091 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:26.091 cryptodev, dmadev, power, reorder, security, vhost, 00:02:26.091 00:02:26.091 Message: 00:02:26.091 =============== 00:02:26.091 Drivers Enabled 00:02:26.091 =============== 00:02:26.091 00:02:26.091 common: 00:02:26.091 00:02:26.091 bus: 00:02:26.091 pci, vdev, 00:02:26.091 mempool: 00:02:26.091 ring, 00:02:26.091 dma: 00:02:26.091 00:02:26.091 net: 00:02:26.091 00:02:26.091 crypto: 00:02:26.091 00:02:26.091 compress: 00:02:26.091 00:02:26.091 vdpa: 00:02:26.091 00:02:26.091 00:02:26.091 Message: 00:02:26.091 ================= 00:02:26.091 Content Skipped 00:02:26.091 ================= 00:02:26.091 00:02:26.091 apps: 00:02:26.091 dumpcap: explicitly disabled via build config 00:02:26.091 graph: explicitly disabled via build config 00:02:26.091 pdump: explicitly disabled via build config 00:02:26.091 proc-info: explicitly disabled via build config 00:02:26.091 test-acl: explicitly disabled via build config 00:02:26.091 test-bbdev: explicitly disabled via build config 00:02:26.091 test-cmdline: explicitly disabled via build config 00:02:26.091 test-compress-perf: explicitly disabled via build config 00:02:26.091 test-crypto-perf: explicitly disabled via build config 00:02:26.091 test-dma-perf: explicitly disabled via build config 00:02:26.091 test-eventdev: explicitly disabled via build config 00:02:26.091 test-fib: explicitly disabled via build config 00:02:26.091 test-flow-perf: explicitly disabled via build config 00:02:26.091 test-gpudev: explicitly disabled via build config 00:02:26.091 test-mldev: explicitly disabled via build config 00:02:26.091 test-pipeline: explicitly disabled via build config 00:02:26.091 test-pmd: explicitly disabled via build config 00:02:26.091 test-regex: explicitly disabled via build config 00:02:26.091 test-sad: explicitly disabled via build config 00:02:26.091 test-security-perf: explicitly disabled via build config 00:02:26.091 00:02:26.091 libs: 00:02:26.091 argparse: explicitly disabled via build config 00:02:26.091 metrics: explicitly disabled via build config 00:02:26.091 acl: explicitly disabled via build config 00:02:26.091 bbdev: explicitly disabled via build config 00:02:26.091 
bitratestats: explicitly disabled via build config 00:02:26.091 bpf: explicitly disabled via build config 00:02:26.091 cfgfile: explicitly disabled via build config 00:02:26.091 distributor: explicitly disabled via build config 00:02:26.091 efd: explicitly disabled via build config 00:02:26.091 eventdev: explicitly disabled via build config 00:02:26.091 dispatcher: explicitly disabled via build config 00:02:26.091 gpudev: explicitly disabled via build config 00:02:26.091 gro: explicitly disabled via build config 00:02:26.091 gso: explicitly disabled via build config 00:02:26.091 ip_frag: explicitly disabled via build config 00:02:26.091 jobstats: explicitly disabled via build config 00:02:26.091 latencystats: explicitly disabled via build config 00:02:26.091 lpm: explicitly disabled via build config 00:02:26.091 member: explicitly disabled via build config 00:02:26.091 pcapng: explicitly disabled via build config 00:02:26.091 rawdev: explicitly disabled via build config 00:02:26.091 regexdev: explicitly disabled via build config 00:02:26.091 mldev: explicitly disabled via build config 00:02:26.091 rib: explicitly disabled via build config 00:02:26.091 sched: explicitly disabled via build config 00:02:26.091 stack: explicitly disabled via build config 00:02:26.091 ipsec: explicitly disabled via build config 00:02:26.091 pdcp: explicitly disabled via build config 00:02:26.091 fib: explicitly disabled via build config 00:02:26.091 port: explicitly disabled via build config 00:02:26.091 pdump: explicitly disabled via build config 00:02:26.091 table: explicitly disabled via build config 00:02:26.091 pipeline: explicitly disabled via build config 00:02:26.091 graph: explicitly disabled via build config 00:02:26.091 node: explicitly disabled via build config 00:02:26.091 00:02:26.091 drivers: 00:02:26.091 common/cpt: not in enabled drivers build config 00:02:26.091 common/dpaax: not in enabled drivers build config 00:02:26.091 common/iavf: not in enabled drivers build config 00:02:26.091 common/idpf: not in enabled drivers build config 00:02:26.091 common/ionic: not in enabled drivers build config 00:02:26.091 common/mvep: not in enabled drivers build config 00:02:26.091 common/octeontx: not in enabled drivers build config 00:02:26.091 bus/auxiliary: not in enabled drivers build config 00:02:26.091 bus/cdx: not in enabled drivers build config 00:02:26.091 bus/dpaa: not in enabled drivers build config 00:02:26.091 bus/fslmc: not in enabled drivers build config 00:02:26.091 bus/ifpga: not in enabled drivers build config 00:02:26.091 bus/platform: not in enabled drivers build config 00:02:26.091 bus/uacce: not in enabled drivers build config 00:02:26.091 bus/vmbus: not in enabled drivers build config 00:02:26.091 common/cnxk: not in enabled drivers build config 00:02:26.091 common/mlx5: not in enabled drivers build config 00:02:26.091 common/nfp: not in enabled drivers build config 00:02:26.091 common/nitrox: not in enabled drivers build config 00:02:26.091 common/qat: not in enabled drivers build config 00:02:26.092 common/sfc_efx: not in enabled drivers build config 00:02:26.092 mempool/bucket: not in enabled drivers build config 00:02:26.092 mempool/cnxk: not in enabled drivers build config 00:02:26.092 mempool/dpaa: not in enabled drivers build config 00:02:26.092 mempool/dpaa2: not in enabled drivers build config 00:02:26.092 mempool/octeontx: not in enabled drivers build config 00:02:26.092 mempool/stack: not in enabled drivers build config 00:02:26.092 dma/cnxk: not in enabled drivers build 
config 00:02:26.092 dma/dpaa: not in enabled drivers build config 00:02:26.092 dma/dpaa2: not in enabled drivers build config 00:02:26.092 dma/hisilicon: not in enabled drivers build config 00:02:26.092 dma/idxd: not in enabled drivers build config 00:02:26.092 dma/ioat: not in enabled drivers build config 00:02:26.092 dma/skeleton: not in enabled drivers build config 00:02:26.092 net/af_packet: not in enabled drivers build config 00:02:26.092 net/af_xdp: not in enabled drivers build config 00:02:26.092 net/ark: not in enabled drivers build config 00:02:26.092 net/atlantic: not in enabled drivers build config 00:02:26.092 net/avp: not in enabled drivers build config 00:02:26.092 net/axgbe: not in enabled drivers build config 00:02:26.092 net/bnx2x: not in enabled drivers build config 00:02:26.092 net/bnxt: not in enabled drivers build config 00:02:26.092 net/bonding: not in enabled drivers build config 00:02:26.092 net/cnxk: not in enabled drivers build config 00:02:26.092 net/cpfl: not in enabled drivers build config 00:02:26.092 net/cxgbe: not in enabled drivers build config 00:02:26.092 net/dpaa: not in enabled drivers build config 00:02:26.092 net/dpaa2: not in enabled drivers build config 00:02:26.092 net/e1000: not in enabled drivers build config 00:02:26.092 net/ena: not in enabled drivers build config 00:02:26.092 net/enetc: not in enabled drivers build config 00:02:26.092 net/enetfec: not in enabled drivers build config 00:02:26.092 net/enic: not in enabled drivers build config 00:02:26.092 net/failsafe: not in enabled drivers build config 00:02:26.092 net/fm10k: not in enabled drivers build config 00:02:26.092 net/gve: not in enabled drivers build config 00:02:26.092 net/hinic: not in enabled drivers build config 00:02:26.092 net/hns3: not in enabled drivers build config 00:02:26.092 net/i40e: not in enabled drivers build config 00:02:26.092 net/iavf: not in enabled drivers build config 00:02:26.092 net/ice: not in enabled drivers build config 00:02:26.092 net/idpf: not in enabled drivers build config 00:02:26.092 net/igc: not in enabled drivers build config 00:02:26.092 net/ionic: not in enabled drivers build config 00:02:26.092 net/ipn3ke: not in enabled drivers build config 00:02:26.092 net/ixgbe: not in enabled drivers build config 00:02:26.092 net/mana: not in enabled drivers build config 00:02:26.092 net/memif: not in enabled drivers build config 00:02:26.092 net/mlx4: not in enabled drivers build config 00:02:26.092 net/mlx5: not in enabled drivers build config 00:02:26.092 net/mvneta: not in enabled drivers build config 00:02:26.092 net/mvpp2: not in enabled drivers build config 00:02:26.092 net/netvsc: not in enabled drivers build config 00:02:26.092 net/nfb: not in enabled drivers build config 00:02:26.092 net/nfp: not in enabled drivers build config 00:02:26.092 net/ngbe: not in enabled drivers build config 00:02:26.092 net/null: not in enabled drivers build config 00:02:26.092 net/octeontx: not in enabled drivers build config 00:02:26.092 net/octeon_ep: not in enabled drivers build config 00:02:26.092 net/pcap: not in enabled drivers build config 00:02:26.092 net/pfe: not in enabled drivers build config 00:02:26.092 net/qede: not in enabled drivers build config 00:02:26.092 net/ring: not in enabled drivers build config 00:02:26.092 net/sfc: not in enabled drivers build config 00:02:26.092 net/softnic: not in enabled drivers build config 00:02:26.092 net/tap: not in enabled drivers build config 00:02:26.092 net/thunderx: not in enabled drivers build config 00:02:26.092 
net/txgbe: not in enabled drivers build config 00:02:26.092 net/vdev_netvsc: not in enabled drivers build config 00:02:26.092 net/vhost: not in enabled drivers build config 00:02:26.092 net/virtio: not in enabled drivers build config 00:02:26.092 net/vmxnet3: not in enabled drivers build config 00:02:26.092 raw/*: missing internal dependency, "rawdev" 00:02:26.092 crypto/armv8: not in enabled drivers build config 00:02:26.092 crypto/bcmfs: not in enabled drivers build config 00:02:26.092 crypto/caam_jr: not in enabled drivers build config 00:02:26.092 crypto/ccp: not in enabled drivers build config 00:02:26.092 crypto/cnxk: not in enabled drivers build config 00:02:26.092 crypto/dpaa_sec: not in enabled drivers build config 00:02:26.092 crypto/dpaa2_sec: not in enabled drivers build config 00:02:26.092 crypto/ipsec_mb: not in enabled drivers build config 00:02:26.092 crypto/mlx5: not in enabled drivers build config 00:02:26.092 crypto/mvsam: not in enabled drivers build config 00:02:26.092 crypto/nitrox: not in enabled drivers build config 00:02:26.092 crypto/null: not in enabled drivers build config 00:02:26.092 crypto/octeontx: not in enabled drivers build config 00:02:26.092 crypto/openssl: not in enabled drivers build config 00:02:26.092 crypto/scheduler: not in enabled drivers build config 00:02:26.092 crypto/uadk: not in enabled drivers build config 00:02:26.092 crypto/virtio: not in enabled drivers build config 00:02:26.092 compress/isal: not in enabled drivers build config 00:02:26.092 compress/mlx5: not in enabled drivers build config 00:02:26.092 compress/nitrox: not in enabled drivers build config 00:02:26.092 compress/octeontx: not in enabled drivers build config 00:02:26.092 compress/zlib: not in enabled drivers build config 00:02:26.092 regex/*: missing internal dependency, "regexdev" 00:02:26.092 ml/*: missing internal dependency, "mldev" 00:02:26.092 vdpa/ifc: not in enabled drivers build config 00:02:26.092 vdpa/mlx5: not in enabled drivers build config 00:02:26.092 vdpa/nfp: not in enabled drivers build config 00:02:26.092 vdpa/sfc: not in enabled drivers build config 00:02:26.092 event/*: missing internal dependency, "eventdev" 00:02:26.092 baseband/*: missing internal dependency, "bbdev" 00:02:26.092 gpu/*: missing internal dependency, "gpudev" 00:02:26.092 00:02:26.092 00:02:26.092 Build targets in project: 85 00:02:26.092 00:02:26.092 DPDK 24.03.0 00:02:26.092 00:02:26.092 User defined options 00:02:26.092 buildtype : debug 00:02:26.092 default_library : shared 00:02:26.092 libdir : lib 00:02:26.092 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:26.092 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:26.092 c_link_args : 00:02:26.092 cpu_instruction_set: native 00:02:26.092 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:26.092 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:26.092 enable_docs : false 00:02:26.092 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:26.092 enable_kmods : false 00:02:26.092 max_lcores : 128 00:02:26.092 tests : false 00:02:26.092 00:02:26.092 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.092 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:26.092 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:26.092 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:26.092 [3/268] Linking static target lib/librte_kvargs.a 00:02:26.092 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:26.092 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:26.092 [6/268] Linking static target lib/librte_log.a 00:02:26.092 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.092 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:26.350 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:26.350 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:26.350 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:26.350 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:26.609 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:26.609 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:26.609 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:26.609 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:26.609 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:26.609 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.609 [19/268] Linking static target lib/librte_telemetry.a 00:02:26.873 [20/268] Linking target lib/librte_log.so.24.1 00:02:26.873 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:26.873 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:27.131 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:27.131 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:27.131 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:27.389 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:27.389 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:27.389 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:27.389 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:27.389 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.389 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:27.389 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:27.647 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:27.647 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:27.647 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.906 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:27.906 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.164 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.164 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.164 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.165 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.165 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.165 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:28.423 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.423 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.423 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.423 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.681 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.681 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:28.681 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.948 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.948 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.948 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.226 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:29.483 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.483 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.483 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.483 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:29.483 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:29.483 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.741 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.741 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.741 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:30.001 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:30.001 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:30.259 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:30.259 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:30.517 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:30.517 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.517 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:30.517 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.775 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:30.775 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:30.775 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:30.775 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.033 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:31.033 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:31.033 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:31.291 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:31.291 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:31.549 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:31.549 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:31.549 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:31.806 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:31.807 [85/268] Linking static target lib/librte_eal.a 00:02:31.807 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:31.807 [87/268] Linking static target lib/librte_ring.a 00:02:32.064 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:32.064 [89/268] Linking static target lib/librte_rcu.a 00:02:32.064 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.322 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.322 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:32.322 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.322 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:32.322 [95/268] Linking static target lib/librte_mempool.a 00:02:32.580 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:32.580 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:32.580 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.838 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:32.838 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:32.838 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.096 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:33.096 [103/268] Linking static target lib/librte_mbuf.a 00:02:33.354 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:33.354 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.354 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:33.354 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.612 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:33.612 [109/268] Linking static target lib/librte_net.a 00:02:33.871 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.871 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:33.871 [112/268] Linking static target lib/librte_meter.a 00:02:33.871 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:34.129 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.388 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.388 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:34.388 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:34.388 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.388 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:34.955 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.213 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.213 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.213 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.472 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.472 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.472 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:35.472 [127/268] Linking static target lib/librte_pci.a 00:02:35.730 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:35.730 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.730 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.730 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:35.730 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:35.730 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:35.988 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.988 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:35.988 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.988 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.988 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.988 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:35.988 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:35.988 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:35.988 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:35.988 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:35.988 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:36.262 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.262 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:36.529 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:36.529 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:36.529 [149/268] Linking static target lib/librte_cmdline.a 00:02:36.529 [150/268] Linking static target lib/librte_ethdev.a 00:02:36.529 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:36.787 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:36.787 [153/268] Linking static target lib/librte_timer.a 00:02:37.046 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:37.046 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:37.046 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:37.046 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:37.046 [158/268] Linking static target lib/librte_hash.a 00:02:37.046 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:37.304 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:37.304 
[161/268] Linking static target lib/librte_compressdev.a 00:02:37.563 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.563 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:37.821 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:37.821 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:37.821 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:37.821 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:38.079 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:38.079 [169/268] Linking static target lib/librte_dmadev.a 00:02:38.079 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.338 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:38.338 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:38.338 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:38.338 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.338 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.338 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.338 [177/268] Linking static target lib/librte_cryptodev.a 00:02:38.597 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:38.855 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:38.855 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.113 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:39.114 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.114 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.114 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.114 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:39.114 [186/268] Linking static target lib/librte_power.a 00:02:39.371 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.371 [188/268] Linking static target lib/librte_reorder.a 00:02:39.629 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.886 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.886 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.886 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.886 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.886 [194/268] Linking static target lib/librte_security.a 00:02:40.144 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.401 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.659 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.659 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:40.917 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.917 [200/268] Generating lib/security.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:40.917 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.175 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.175 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.175 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.433 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.433 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:41.433 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:41.433 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.691 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:41.691 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.691 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:41.691 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:41.950 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.950 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.950 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.950 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:41.950 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.950 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.950 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.950 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.950 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.950 [222/268] Linking static target drivers/librte_bus_vdev.a 00:02:41.950 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.209 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.209 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.209 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:42.209 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.209 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.143 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.401 [230/268] Linking static target lib/librte_vhost.a 00:02:43.659 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.659 [232/268] Linking target lib/librte_eal.so.24.1 00:02:43.917 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:43.917 [234/268] Linking target lib/librte_meter.so.24.1 00:02:43.917 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:43.917 [236/268] Linking target lib/librte_pci.so.24.1 00:02:43.917 [237/268] Linking target lib/librte_timer.so.24.1 00:02:43.917 [238/268] Linking target lib/librte_ring.so.24.1 00:02:43.917 [239/268] Linking target lib/librte_dmadev.so.24.1 
00:02:44.175 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:44.175 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:44.175 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:44.175 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:44.175 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:44.175 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:44.175 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:44.175 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:44.433 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:44.433 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:44.433 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:44.433 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:44.433 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:44.691 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:44.691 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:44.691 [255/268] Linking target lib/librte_net.so.24.1 00:02:44.691 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:44.691 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.691 [258/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.691 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:44.691 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:44.691 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:44.691 [262/268] Linking target lib/librte_security.so.24.1 00:02:44.950 [263/268] Linking target lib/librte_hash.so.24.1 00:02:44.950 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:44.950 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:44.950 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:44.950 [267/268] Linking target lib/librte_power.so.24.1 00:02:45.208 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:45.208 INFO: autodetecting backend as ninja 00:02:45.208 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:46.584 CC lib/log/log.o 00:02:46.584 CC lib/log/log_flags.o 00:02:46.584 CC lib/ut_mock/mock.o 00:02:46.584 CC lib/ut/ut.o 00:02:46.584 CC lib/log/log_deprecated.o 00:02:46.584 LIB libspdk_ut.a 00:02:46.584 LIB libspdk_log.a 00:02:46.584 LIB libspdk_ut_mock.a 00:02:46.584 SO libspdk_ut.so.2.0 00:02:46.584 SO libspdk_ut_mock.so.6.0 00:02:46.584 SO libspdk_log.so.7.0 00:02:46.584 SYMLINK libspdk_ut.so 00:02:46.584 SYMLINK libspdk_ut_mock.so 00:02:46.584 SYMLINK libspdk_log.so 00:02:46.842 CC lib/util/base64.o 00:02:46.842 CC lib/util/bit_array.o 00:02:46.842 CC lib/ioat/ioat.o 00:02:46.842 CC lib/util/cpuset.o 00:02:46.842 CC lib/util/crc16.o 00:02:46.842 CC lib/util/crc32.o 00:02:46.842 CC lib/util/crc32c.o 00:02:46.842 CC lib/dma/dma.o 00:02:46.842 CXX lib/trace_parser/trace.o 00:02:46.842 CC lib/vfio_user/host/vfio_user_pci.o 00:02:46.842 CC lib/util/crc32_ieee.o 00:02:47.101 CC lib/util/crc64.o 00:02:47.101 CC 
lib/vfio_user/host/vfio_user.o 00:02:47.101 LIB libspdk_dma.a 00:02:47.101 CC lib/util/dif.o 00:02:47.101 CC lib/util/fd.o 00:02:47.101 CC lib/util/fd_group.o 00:02:47.101 SO libspdk_dma.so.4.0 00:02:47.101 CC lib/util/file.o 00:02:47.101 LIB libspdk_ioat.a 00:02:47.101 SO libspdk_ioat.so.7.0 00:02:47.101 SYMLINK libspdk_dma.so 00:02:47.101 CC lib/util/hexlify.o 00:02:47.101 CC lib/util/iov.o 00:02:47.101 CC lib/util/math.o 00:02:47.101 SYMLINK libspdk_ioat.so 00:02:47.101 CC lib/util/net.o 00:02:47.359 CC lib/util/pipe.o 00:02:47.359 LIB libspdk_vfio_user.a 00:02:47.359 CC lib/util/strerror_tls.o 00:02:47.359 CC lib/util/string.o 00:02:47.359 SO libspdk_vfio_user.so.5.0 00:02:47.359 CC lib/util/uuid.o 00:02:47.359 SYMLINK libspdk_vfio_user.so 00:02:47.359 CC lib/util/xor.o 00:02:47.359 CC lib/util/zipf.o 00:02:47.618 LIB libspdk_util.a 00:02:47.618 SO libspdk_util.so.10.0 00:02:47.876 LIB libspdk_trace_parser.a 00:02:47.876 SYMLINK libspdk_util.so 00:02:47.876 SO libspdk_trace_parser.so.5.0 00:02:47.876 SYMLINK libspdk_trace_parser.so 00:02:48.134 CC lib/env_dpdk/env.o 00:02:48.134 CC lib/env_dpdk/memory.o 00:02:48.134 CC lib/conf/conf.o 00:02:48.134 CC lib/env_dpdk/pci.o 00:02:48.134 CC lib/env_dpdk/init.o 00:02:48.134 CC lib/json/json_parse.o 00:02:48.134 CC lib/idxd/idxd.o 00:02:48.134 CC lib/vmd/vmd.o 00:02:48.134 CC lib/rdma_utils/rdma_utils.o 00:02:48.134 CC lib/rdma_provider/common.o 00:02:48.134 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.134 LIB libspdk_conf.a 00:02:48.393 CC lib/json/json_util.o 00:02:48.393 SO libspdk_conf.so.6.0 00:02:48.393 LIB libspdk_rdma_utils.a 00:02:48.393 SO libspdk_rdma_utils.so.1.0 00:02:48.393 SYMLINK libspdk_conf.so 00:02:48.393 CC lib/idxd/idxd_user.o 00:02:48.393 SYMLINK libspdk_rdma_utils.so 00:02:48.393 CC lib/idxd/idxd_kernel.o 00:02:48.393 CC lib/vmd/led.o 00:02:48.393 CC lib/env_dpdk/threads.o 00:02:48.393 LIB libspdk_rdma_provider.a 00:02:48.393 SO libspdk_rdma_provider.so.6.0 00:02:48.393 CC lib/json/json_write.o 00:02:48.651 SYMLINK libspdk_rdma_provider.so 00:02:48.651 CC lib/env_dpdk/pci_ioat.o 00:02:48.651 CC lib/env_dpdk/pci_virtio.o 00:02:48.651 CC lib/env_dpdk/pci_vmd.o 00:02:48.651 CC lib/env_dpdk/pci_idxd.o 00:02:48.651 CC lib/env_dpdk/pci_event.o 00:02:48.651 LIB libspdk_idxd.a 00:02:48.651 SO libspdk_idxd.so.12.0 00:02:48.651 LIB libspdk_vmd.a 00:02:48.651 CC lib/env_dpdk/sigbus_handler.o 00:02:48.651 CC lib/env_dpdk/pci_dpdk.o 00:02:48.651 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:48.651 SO libspdk_vmd.so.6.0 00:02:48.651 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:48.651 SYMLINK libspdk_idxd.so 00:02:48.651 SYMLINK libspdk_vmd.so 00:02:48.651 LIB libspdk_json.a 00:02:48.909 SO libspdk_json.so.6.0 00:02:48.909 SYMLINK libspdk_json.so 00:02:49.167 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.167 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.167 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.167 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.426 LIB libspdk_env_dpdk.a 00:02:49.426 LIB libspdk_jsonrpc.a 00:02:49.426 SO libspdk_jsonrpc.so.6.0 00:02:49.426 SO libspdk_env_dpdk.so.15.0 00:02:49.684 SYMLINK libspdk_jsonrpc.so 00:02:49.684 SYMLINK libspdk_env_dpdk.so 00:02:49.943 CC lib/rpc/rpc.o 00:02:49.943 LIB libspdk_rpc.a 00:02:50.200 SO libspdk_rpc.so.6.0 00:02:50.200 SYMLINK libspdk_rpc.so 00:02:50.458 CC lib/trace/trace.o 00:02:50.458 CC lib/trace/trace_flags.o 00:02:50.458 CC lib/trace/trace_rpc.o 00:02:50.458 CC lib/keyring/keyring.o 00:02:50.458 CC lib/keyring/keyring_rpc.o 00:02:50.458 CC lib/notify/notify.o 00:02:50.458 CC 
lib/notify/notify_rpc.o 00:02:50.458 LIB libspdk_notify.a 00:02:50.715 SO libspdk_notify.so.6.0 00:02:50.715 LIB libspdk_keyring.a 00:02:50.716 LIB libspdk_trace.a 00:02:50.716 SYMLINK libspdk_notify.so 00:02:50.716 SO libspdk_keyring.so.1.0 00:02:50.716 SO libspdk_trace.so.10.0 00:02:50.716 SYMLINK libspdk_keyring.so 00:02:50.716 SYMLINK libspdk_trace.so 00:02:50.974 CC lib/thread/iobuf.o 00:02:50.974 CC lib/sock/sock.o 00:02:50.974 CC lib/thread/thread.o 00:02:50.974 CC lib/sock/sock_rpc.o 00:02:51.540 LIB libspdk_sock.a 00:02:51.540 SO libspdk_sock.so.10.0 00:02:51.797 SYMLINK libspdk_sock.so 00:02:52.054 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.054 CC lib/nvme/nvme_ctrlr.o 00:02:52.054 CC lib/nvme/nvme_fabric.o 00:02:52.054 CC lib/nvme/nvme_ns_cmd.o 00:02:52.054 CC lib/nvme/nvme_ns.o 00:02:52.054 CC lib/nvme/nvme_pcie_common.o 00:02:52.054 CC lib/nvme/nvme_pcie.o 00:02:52.054 CC lib/nvme/nvme.o 00:02:52.054 CC lib/nvme/nvme_qpair.o 00:02:52.619 LIB libspdk_thread.a 00:02:52.619 SO libspdk_thread.so.10.1 00:02:52.877 SYMLINK libspdk_thread.so 00:02:52.877 CC lib/nvme/nvme_quirks.o 00:02:52.877 CC lib/nvme/nvme_transport.o 00:02:52.877 CC lib/nvme/nvme_discovery.o 00:02:52.877 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:53.134 CC lib/accel/accel.o 00:02:53.134 CC lib/blob/blobstore.o 00:02:53.134 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:53.134 CC lib/init/json_config.o 00:02:53.134 CC lib/init/subsystem.o 00:02:53.392 CC lib/init/subsystem_rpc.o 00:02:53.392 CC lib/init/rpc.o 00:02:53.392 CC lib/accel/accel_rpc.o 00:02:53.392 CC lib/accel/accel_sw.o 00:02:53.650 LIB libspdk_init.a 00:02:53.650 CC lib/blob/request.o 00:02:53.650 SO libspdk_init.so.5.0 00:02:53.650 CC lib/blob/zeroes.o 00:02:53.650 CC lib/nvme/nvme_tcp.o 00:02:53.650 SYMLINK libspdk_init.so 00:02:53.650 CC lib/nvme/nvme_opal.o 00:02:53.908 CC lib/blob/blob_bs_dev.o 00:02:53.908 CC lib/virtio/virtio.o 00:02:53.908 CC lib/event/app.o 00:02:53.908 CC lib/virtio/virtio_vhost_user.o 00:02:53.908 CC lib/virtio/virtio_vfio_user.o 00:02:53.908 LIB libspdk_accel.a 00:02:53.908 SO libspdk_accel.so.16.0 00:02:54.166 CC lib/event/reactor.o 00:02:54.166 CC lib/virtio/virtio_pci.o 00:02:54.166 SYMLINK libspdk_accel.so 00:02:54.166 CC lib/nvme/nvme_io_msg.o 00:02:54.166 CC lib/nvme/nvme_poll_group.o 00:02:54.166 CC lib/event/log_rpc.o 00:02:54.166 CC lib/event/app_rpc.o 00:02:54.166 CC lib/nvme/nvme_zns.o 00:02:54.166 CC lib/nvme/nvme_stubs.o 00:02:54.424 CC lib/event/scheduler_static.o 00:02:54.424 LIB libspdk_virtio.a 00:02:54.424 SO libspdk_virtio.so.7.0 00:02:54.424 CC lib/nvme/nvme_auth.o 00:02:54.424 LIB libspdk_event.a 00:02:54.424 SYMLINK libspdk_virtio.so 00:02:54.424 CC lib/nvme/nvme_cuse.o 00:02:54.681 SO libspdk_event.so.14.0 00:02:54.681 SYMLINK libspdk_event.so 00:02:54.681 CC lib/nvme/nvme_rdma.o 00:02:54.939 CC lib/bdev/bdev.o 00:02:54.939 CC lib/bdev/bdev_rpc.o 00:02:54.939 CC lib/bdev/bdev_zone.o 00:02:54.939 CC lib/bdev/part.o 00:02:54.939 CC lib/bdev/scsi_nvme.o 00:02:56.317 LIB libspdk_nvme.a 00:02:56.317 LIB libspdk_blob.a 00:02:56.317 SO libspdk_blob.so.11.0 00:02:56.317 SO libspdk_nvme.so.13.1 00:02:56.317 SYMLINK libspdk_blob.so 00:02:56.575 SYMLINK libspdk_nvme.so 00:02:56.575 CC lib/blobfs/blobfs.o 00:02:56.575 CC lib/lvol/lvol.o 00:02:56.575 CC lib/blobfs/tree.o 00:02:57.509 LIB libspdk_blobfs.a 00:02:57.509 LIB libspdk_lvol.a 00:02:57.509 SO libspdk_blobfs.so.10.0 00:02:57.509 SO libspdk_lvol.so.10.0 00:02:57.509 SYMLINK libspdk_blobfs.so 00:02:57.509 SYMLINK libspdk_lvol.so 00:02:57.767 LIB libspdk_bdev.a 
00:02:57.767 SO libspdk_bdev.so.16.0 00:02:58.026 SYMLINK libspdk_bdev.so 00:02:58.285 CC lib/scsi/dev.o 00:02:58.285 CC lib/scsi/lun.o 00:02:58.285 CC lib/scsi/port.o 00:02:58.285 CC lib/scsi/scsi.o 00:02:58.285 CC lib/scsi/scsi_bdev.o 00:02:58.285 CC lib/scsi/scsi_pr.o 00:02:58.285 CC lib/ftl/ftl_core.o 00:02:58.285 CC lib/nbd/nbd.o 00:02:58.285 CC lib/ublk/ublk.o 00:02:58.285 CC lib/nvmf/ctrlr.o 00:02:58.285 CC lib/scsi/scsi_rpc.o 00:02:58.285 CC lib/nbd/nbd_rpc.o 00:02:58.542 CC lib/scsi/task.o 00:02:58.542 CC lib/ublk/ublk_rpc.o 00:02:58.542 CC lib/ftl/ftl_init.o 00:02:58.542 CC lib/nvmf/ctrlr_discovery.o 00:02:58.542 CC lib/ftl/ftl_layout.o 00:02:58.542 CC lib/ftl/ftl_debug.o 00:02:58.542 LIB libspdk_nbd.a 00:02:58.800 CC lib/ftl/ftl_io.o 00:02:58.800 CC lib/nvmf/ctrlr_bdev.o 00:02:58.800 SO libspdk_nbd.so.7.0 00:02:58.800 LIB libspdk_scsi.a 00:02:58.800 CC lib/nvmf/subsystem.o 00:02:58.800 SYMLINK libspdk_nbd.so 00:02:58.800 CC lib/nvmf/nvmf.o 00:02:58.800 SO libspdk_scsi.so.9.0 00:02:58.800 CC lib/ftl/ftl_sb.o 00:02:58.800 LIB libspdk_ublk.a 00:02:58.800 SO libspdk_ublk.so.3.0 00:02:58.800 SYMLINK libspdk_scsi.so 00:02:59.059 SYMLINK libspdk_ublk.so 00:02:59.059 CC lib/nvmf/nvmf_rpc.o 00:02:59.059 CC lib/nvmf/transport.o 00:02:59.059 CC lib/ftl/ftl_l2p.o 00:02:59.059 CC lib/vhost/vhost.o 00:02:59.059 CC lib/iscsi/conn.o 00:02:59.059 CC lib/iscsi/init_grp.o 00:02:59.317 CC lib/ftl/ftl_l2p_flat.o 00:02:59.317 CC lib/ftl/ftl_nv_cache.o 00:02:59.317 CC lib/ftl/ftl_band.o 00:02:59.576 CC lib/vhost/vhost_rpc.o 00:02:59.576 CC lib/nvmf/tcp.o 00:02:59.576 CC lib/vhost/vhost_scsi.o 00:02:59.576 CC lib/iscsi/iscsi.o 00:02:59.834 CC lib/iscsi/md5.o 00:02:59.834 CC lib/iscsi/param.o 00:02:59.834 CC lib/iscsi/portal_grp.o 00:02:59.834 CC lib/iscsi/tgt_node.o 00:02:59.834 CC lib/iscsi/iscsi_subsystem.o 00:03:00.092 CC lib/ftl/ftl_band_ops.o 00:03:00.092 CC lib/ftl/ftl_writer.o 00:03:00.092 CC lib/ftl/ftl_rq.o 00:03:00.092 CC lib/nvmf/stubs.o 00:03:00.351 CC lib/vhost/vhost_blk.o 00:03:00.351 CC lib/iscsi/iscsi_rpc.o 00:03:00.351 CC lib/ftl/ftl_reloc.o 00:03:00.351 CC lib/iscsi/task.o 00:03:00.351 CC lib/nvmf/mdns_server.o 00:03:00.351 CC lib/ftl/ftl_l2p_cache.o 00:03:00.609 CC lib/vhost/rte_vhost_user.o 00:03:00.609 CC lib/nvmf/rdma.o 00:03:00.609 CC lib/nvmf/auth.o 00:03:00.609 CC lib/ftl/ftl_p2l.o 00:03:00.868 CC lib/ftl/mngt/ftl_mngt.o 00:03:00.868 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:00.868 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:01.125 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:01.125 LIB libspdk_iscsi.a 00:03:01.125 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:01.125 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:01.125 SO libspdk_iscsi.so.8.0 00:03:01.125 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:01.125 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:01.125 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:01.386 SYMLINK libspdk_iscsi.so 00:03:01.386 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:01.386 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:01.386 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:01.386 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:01.386 CC lib/ftl/utils/ftl_conf.o 00:03:01.386 CC lib/ftl/utils/ftl_md.o 00:03:01.386 CC lib/ftl/utils/ftl_mempool.o 00:03:01.644 CC lib/ftl/utils/ftl_bitmap.o 00:03:01.644 CC lib/ftl/utils/ftl_property.o 00:03:01.644 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:01.644 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:01.644 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:01.644 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:01.644 LIB libspdk_vhost.a 00:03:01.902 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:01.902 
SO libspdk_vhost.so.8.0 00:03:01.902 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:01.902 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:01.902 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:01.902 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:01.902 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:01.902 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:01.902 SYMLINK libspdk_vhost.so 00:03:01.902 CC lib/ftl/base/ftl_base_dev.o 00:03:01.902 CC lib/ftl/base/ftl_base_bdev.o 00:03:01.902 CC lib/ftl/ftl_trace.o 00:03:02.468 LIB libspdk_ftl.a 00:03:02.468 SO libspdk_ftl.so.9.0 00:03:02.726 LIB libspdk_nvmf.a 00:03:02.726 SO libspdk_nvmf.so.19.0 00:03:02.985 SYMLINK libspdk_ftl.so 00:03:02.985 SYMLINK libspdk_nvmf.so 00:03:03.243 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.243 CC module/sock/posix/posix.o 00:03:03.243 CC module/scheduler/gscheduler/gscheduler.o 00:03:03.243 CC module/sock/uring/uring.o 00:03:03.243 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:03.243 CC module/blob/bdev/blob_bdev.o 00:03:03.243 CC module/accel/error/accel_error.o 00:03:03.243 CC module/keyring/file/keyring.o 00:03:03.243 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:03.243 CC module/accel/ioat/accel_ioat.o 00:03:03.502 LIB libspdk_env_dpdk_rpc.a 00:03:03.502 SO libspdk_env_dpdk_rpc.so.6.0 00:03:03.502 LIB libspdk_scheduler_gscheduler.a 00:03:03.502 CC module/keyring/file/keyring_rpc.o 00:03:03.502 SYMLINK libspdk_env_dpdk_rpc.so 00:03:03.502 CC module/accel/ioat/accel_ioat_rpc.o 00:03:03.502 LIB libspdk_scheduler_dpdk_governor.a 00:03:03.502 SO libspdk_scheduler_gscheduler.so.4.0 00:03:03.502 CC module/accel/error/accel_error_rpc.o 00:03:03.502 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:03.502 LIB libspdk_scheduler_dynamic.a 00:03:03.502 SO libspdk_scheduler_dynamic.so.4.0 00:03:03.502 SYMLINK libspdk_scheduler_gscheduler.so 00:03:03.502 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:03.502 LIB libspdk_blob_bdev.a 00:03:03.760 LIB libspdk_keyring_file.a 00:03:03.760 SO libspdk_blob_bdev.so.11.0 00:03:03.760 LIB libspdk_accel_ioat.a 00:03:03.760 SYMLINK libspdk_scheduler_dynamic.so 00:03:03.760 SO libspdk_keyring_file.so.1.0 00:03:03.760 LIB libspdk_accel_error.a 00:03:03.760 SO libspdk_accel_ioat.so.6.0 00:03:03.760 SYMLINK libspdk_blob_bdev.so 00:03:03.760 SO libspdk_accel_error.so.2.0 00:03:03.760 SYMLINK libspdk_keyring_file.so 00:03:03.760 SYMLINK libspdk_accel_ioat.so 00:03:03.760 CC module/accel/dsa/accel_dsa.o 00:03:03.760 CC module/accel/dsa/accel_dsa_rpc.o 00:03:03.760 CC module/keyring/linux/keyring.o 00:03:03.760 CC module/keyring/linux/keyring_rpc.o 00:03:03.760 SYMLINK libspdk_accel_error.so 00:03:03.760 CC module/accel/iaa/accel_iaa.o 00:03:03.760 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.018 LIB libspdk_keyring_linux.a 00:03:04.018 SO libspdk_keyring_linux.so.1.0 00:03:04.018 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.018 CC module/bdev/delay/vbdev_delay.o 00:03:04.018 LIB libspdk_accel_iaa.a 00:03:04.018 LIB libspdk_accel_dsa.a 00:03:04.018 LIB libspdk_sock_uring.a 00:03:04.018 SYMLINK libspdk_keyring_linux.so 00:03:04.018 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.018 SO libspdk_sock_uring.so.5.0 00:03:04.018 SO libspdk_accel_dsa.so.5.0 00:03:04.018 SO libspdk_accel_iaa.so.3.0 00:03:04.018 CC module/bdev/error/vbdev_error.o 00:03:04.018 CC module/bdev/gpt/gpt.o 00:03:04.018 SYMLINK libspdk_sock_uring.so 00:03:04.018 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.018 SYMLINK libspdk_accel_dsa.so 00:03:04.277 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.277 SYMLINK libspdk_accel_iaa.so 00:03:04.277 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.277 LIB libspdk_sock_posix.a 00:03:04.277 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.277 SO libspdk_sock_posix.so.6.0 00:03:04.277 CC module/bdev/malloc/bdev_malloc.o 00:03:04.277 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.277 LIB libspdk_bdev_delay.a 00:03:04.277 SYMLINK libspdk_sock_posix.so 00:03:04.277 LIB libspdk_blobfs_bdev.a 00:03:04.277 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.277 CC module/bdev/null/bdev_null.o 00:03:04.535 SO libspdk_bdev_delay.so.6.0 00:03:04.535 SO libspdk_blobfs_bdev.so.6.0 00:03:04.535 CC module/bdev/null/bdev_null_rpc.o 00:03:04.535 SYMLINK libspdk_bdev_delay.so 00:03:04.535 SYMLINK libspdk_blobfs_bdev.so 00:03:04.535 LIB libspdk_bdev_gpt.a 00:03:04.535 LIB libspdk_bdev_error.a 00:03:04.535 SO libspdk_bdev_gpt.so.6.0 00:03:04.535 SO libspdk_bdev_error.so.6.0 00:03:04.793 SYMLINK libspdk_bdev_gpt.so 00:03:04.793 CC module/bdev/nvme/bdev_nvme.o 00:03:04.793 LIB libspdk_bdev_lvol.a 00:03:04.793 LIB libspdk_bdev_null.a 00:03:04.793 SYMLINK libspdk_bdev_error.so 00:03:04.793 CC module/bdev/split/vbdev_split.o 00:03:04.793 CC module/bdev/raid/bdev_raid.o 00:03:04.793 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.793 LIB libspdk_bdev_malloc.a 00:03:04.793 SO libspdk_bdev_null.so.6.0 00:03:04.793 SO libspdk_bdev_lvol.so.6.0 00:03:04.793 SO libspdk_bdev_malloc.so.6.0 00:03:04.793 SYMLINK libspdk_bdev_null.so 00:03:04.793 SYMLINK libspdk_bdev_lvol.so 00:03:04.793 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.793 SYMLINK libspdk_bdev_malloc.so 00:03:04.793 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.793 CC module/bdev/uring/bdev_uring.o 00:03:04.793 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.793 CC module/bdev/aio/bdev_aio.o 00:03:05.052 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.052 CC module/bdev/ftl/bdev_ftl.o 00:03:05.052 CC module/bdev/uring/bdev_uring_rpc.o 00:03:05.052 LIB libspdk_bdev_passthru.a 00:03:05.052 SO libspdk_bdev_passthru.so.6.0 00:03:05.052 LIB libspdk_bdev_split.a 00:03:05.052 SO libspdk_bdev_split.so.6.0 00:03:05.310 SYMLINK libspdk_bdev_passthru.so 00:03:05.310 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.310 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:05.310 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:05.310 CC module/bdev/raid/bdev_raid_rpc.o 00:03:05.310 SYMLINK libspdk_bdev_split.so 00:03:05.310 LIB libspdk_bdev_uring.a 00:03:05.310 SO libspdk_bdev_uring.so.6.0 00:03:05.310 CC module/bdev/raid/bdev_raid_sb.o 00:03:05.310 SYMLINK libspdk_bdev_uring.so 00:03:05.310 LIB libspdk_bdev_aio.a 00:03:05.310 LIB libspdk_bdev_zone_block.a 00:03:05.310 CC module/bdev/iscsi/bdev_iscsi.o 00:03:05.310 SO libspdk_bdev_aio.so.6.0 00:03:05.310 LIB libspdk_bdev_ftl.a 00:03:05.310 SO libspdk_bdev_zone_block.so.6.0 00:03:05.568 CC module/bdev/raid/raid0.o 00:03:05.568 CC module/bdev/nvme/nvme_rpc.o 00:03:05.568 SO libspdk_bdev_ftl.so.6.0 00:03:05.568 SYMLINK libspdk_bdev_aio.so 00:03:05.568 SYMLINK libspdk_bdev_zone_block.so 00:03:05.568 CC module/bdev/raid/raid1.o 00:03:05.568 CC module/bdev/raid/concat.o 00:03:05.568 SYMLINK libspdk_bdev_ftl.so 00:03:05.569 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:05.569 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:05.569 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:05.569 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:05.827 CC module/bdev/nvme/bdev_mdns_client.o 00:03:05.827 CC module/bdev/nvme/vbdev_opal.o 00:03:05.827 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:05.827 LIB libspdk_bdev_raid.a 00:03:05.827 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:05.827 SO libspdk_bdev_raid.so.6.0 00:03:05.827 LIB libspdk_bdev_iscsi.a 00:03:05.827 SO libspdk_bdev_iscsi.so.6.0 00:03:05.827 SYMLINK libspdk_bdev_raid.so 00:03:06.085 SYMLINK libspdk_bdev_iscsi.so 00:03:06.085 LIB libspdk_bdev_virtio.a 00:03:06.085 SO libspdk_bdev_virtio.so.6.0 00:03:06.343 SYMLINK libspdk_bdev_virtio.so 00:03:06.909 LIB libspdk_bdev_nvme.a 00:03:06.909 SO libspdk_bdev_nvme.so.7.0 00:03:06.909 SYMLINK libspdk_bdev_nvme.so 00:03:07.475 CC module/event/subsystems/vmd/vmd.o 00:03:07.475 CC module/event/subsystems/scheduler/scheduler.o 00:03:07.475 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:07.475 CC module/event/subsystems/keyring/keyring.o 00:03:07.475 CC module/event/subsystems/iobuf/iobuf.o 00:03:07.475 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:07.475 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:07.475 CC module/event/subsystems/sock/sock.o 00:03:07.733 LIB libspdk_event_scheduler.a 00:03:07.733 LIB libspdk_event_keyring.a 00:03:07.733 LIB libspdk_event_vhost_blk.a 00:03:07.733 LIB libspdk_event_vmd.a 00:03:07.733 LIB libspdk_event_sock.a 00:03:07.733 SO libspdk_event_scheduler.so.4.0 00:03:07.733 SO libspdk_event_keyring.so.1.0 00:03:07.733 LIB libspdk_event_iobuf.a 00:03:07.733 SO libspdk_event_vhost_blk.so.3.0 00:03:07.733 SO libspdk_event_sock.so.5.0 00:03:07.733 SO libspdk_event_vmd.so.6.0 00:03:07.733 SYMLINK libspdk_event_scheduler.so 00:03:07.733 SO libspdk_event_iobuf.so.3.0 00:03:07.733 SYMLINK libspdk_event_keyring.so 00:03:07.733 SYMLINK libspdk_event_vhost_blk.so 00:03:07.733 SYMLINK libspdk_event_sock.so 00:03:07.733 SYMLINK libspdk_event_vmd.so 00:03:07.733 SYMLINK libspdk_event_iobuf.so 00:03:07.991 CC module/event/subsystems/accel/accel.o 00:03:08.249 LIB libspdk_event_accel.a 00:03:08.249 SO libspdk_event_accel.so.6.0 00:03:08.507 SYMLINK libspdk_event_accel.so 00:03:08.767 CC module/event/subsystems/bdev/bdev.o 00:03:08.767 LIB libspdk_event_bdev.a 00:03:09.025 SO libspdk_event_bdev.so.6.0 00:03:09.025 SYMLINK libspdk_event_bdev.so 00:03:09.283 CC module/event/subsystems/ublk/ublk.o 00:03:09.283 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:09.283 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:09.283 CC module/event/subsystems/scsi/scsi.o 00:03:09.283 CC module/event/subsystems/nbd/nbd.o 00:03:09.283 LIB libspdk_event_ublk.a 00:03:09.283 LIB libspdk_event_nbd.a 00:03:09.540 SO libspdk_event_ublk.so.3.0 00:03:09.541 LIB libspdk_event_scsi.a 00:03:09.541 SO libspdk_event_nbd.so.6.0 00:03:09.541 SO libspdk_event_scsi.so.6.0 00:03:09.541 SYMLINK libspdk_event_ublk.so 00:03:09.541 SYMLINK libspdk_event_nbd.so 00:03:09.541 LIB libspdk_event_nvmf.a 00:03:09.541 SYMLINK libspdk_event_scsi.so 00:03:09.541 SO libspdk_event_nvmf.so.6.0 00:03:09.541 SYMLINK libspdk_event_nvmf.so 00:03:09.798 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:09.798 CC module/event/subsystems/iscsi/iscsi.o 00:03:10.056 LIB libspdk_event_vhost_scsi.a 00:03:10.056 LIB libspdk_event_iscsi.a 00:03:10.056 SO libspdk_event_vhost_scsi.so.3.0 00:03:10.056 SO libspdk_event_iscsi.so.6.0 00:03:10.056 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.056 SYMLINK libspdk_event_iscsi.so 00:03:10.315 SO libspdk.so.6.0 00:03:10.315 SYMLINK libspdk.so 00:03:10.574 CXX app/trace/trace.o 00:03:10.574 CC app/trace_record/trace_record.o 00:03:10.574 TEST_HEADER include/spdk/accel.h 00:03:10.574 TEST_HEADER include/spdk/accel_module.h 00:03:10.574 TEST_HEADER include/spdk/assert.h 00:03:10.574 TEST_HEADER include/spdk/barrier.h 
00:03:10.574 TEST_HEADER include/spdk/base64.h 00:03:10.574 TEST_HEADER include/spdk/bdev.h 00:03:10.574 TEST_HEADER include/spdk/bdev_module.h 00:03:10.574 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.574 TEST_HEADER include/spdk/bit_array.h 00:03:10.574 TEST_HEADER include/spdk/bit_pool.h 00:03:10.574 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.574 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.574 TEST_HEADER include/spdk/blobfs.h 00:03:10.574 TEST_HEADER include/spdk/blob.h 00:03:10.574 TEST_HEADER include/spdk/conf.h 00:03:10.574 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.574 TEST_HEADER include/spdk/config.h 00:03:10.574 TEST_HEADER include/spdk/cpuset.h 00:03:10.574 TEST_HEADER include/spdk/crc16.h 00:03:10.574 TEST_HEADER include/spdk/crc32.h 00:03:10.574 TEST_HEADER include/spdk/crc64.h 00:03:10.574 TEST_HEADER include/spdk/dif.h 00:03:10.574 TEST_HEADER include/spdk/dma.h 00:03:10.574 TEST_HEADER include/spdk/endian.h 00:03:10.574 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.574 TEST_HEADER include/spdk/env.h 00:03:10.574 TEST_HEADER include/spdk/event.h 00:03:10.574 TEST_HEADER include/spdk/fd_group.h 00:03:10.574 TEST_HEADER include/spdk/fd.h 00:03:10.574 TEST_HEADER include/spdk/file.h 00:03:10.574 TEST_HEADER include/spdk/ftl.h 00:03:10.574 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.574 TEST_HEADER include/spdk/hexlify.h 00:03:10.575 TEST_HEADER include/spdk/histogram_data.h 00:03:10.575 TEST_HEADER include/spdk/idxd.h 00:03:10.575 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.575 CC examples/util/zipf/zipf.o 00:03:10.575 TEST_HEADER include/spdk/init.h 00:03:10.575 CC examples/ioat/perf/perf.o 00:03:10.575 TEST_HEADER include/spdk/ioat.h 00:03:10.575 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.575 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.575 TEST_HEADER include/spdk/json.h 00:03:10.575 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.575 TEST_HEADER include/spdk/keyring.h 00:03:10.575 TEST_HEADER include/spdk/keyring_module.h 00:03:10.575 TEST_HEADER include/spdk/likely.h 00:03:10.575 TEST_HEADER include/spdk/log.h 00:03:10.575 TEST_HEADER include/spdk/lvol.h 00:03:10.575 TEST_HEADER include/spdk/memory.h 00:03:10.575 TEST_HEADER include/spdk/mmio.h 00:03:10.575 TEST_HEADER include/spdk/nbd.h 00:03:10.575 TEST_HEADER include/spdk/net.h 00:03:10.575 TEST_HEADER include/spdk/notify.h 00:03:10.575 TEST_HEADER include/spdk/nvme.h 00:03:10.575 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.575 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.575 CC test/thread/poller_perf/poller_perf.o 00:03:10.575 CC test/dma/test_dma/test_dma.o 00:03:10.575 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.575 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.575 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.575 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.575 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.575 TEST_HEADER include/spdk/nvmf.h 00:03:10.575 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.575 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.575 TEST_HEADER include/spdk/opal.h 00:03:10.575 TEST_HEADER include/spdk/opal_spec.h 00:03:10.575 TEST_HEADER include/spdk/pci_ids.h 00:03:10.575 CC test/app/bdev_svc/bdev_svc.o 00:03:10.575 TEST_HEADER include/spdk/pipe.h 00:03:10.575 TEST_HEADER include/spdk/queue.h 00:03:10.575 TEST_HEADER include/spdk/reduce.h 00:03:10.575 TEST_HEADER include/spdk/rpc.h 00:03:10.575 TEST_HEADER include/spdk/scheduler.h 00:03:10.575 TEST_HEADER include/spdk/scsi.h 00:03:10.575 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.575 TEST_HEADER 
include/spdk/sock.h 00:03:10.575 TEST_HEADER include/spdk/stdinc.h 00:03:10.575 TEST_HEADER include/spdk/string.h 00:03:10.575 TEST_HEADER include/spdk/thread.h 00:03:10.575 TEST_HEADER include/spdk/trace.h 00:03:10.575 TEST_HEADER include/spdk/trace_parser.h 00:03:10.575 TEST_HEADER include/spdk/tree.h 00:03:10.575 TEST_HEADER include/spdk/ublk.h 00:03:10.575 TEST_HEADER include/spdk/util.h 00:03:10.833 TEST_HEADER include/spdk/uuid.h 00:03:10.833 TEST_HEADER include/spdk/version.h 00:03:10.833 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.833 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.833 CC test/env/mem_callbacks/mem_callbacks.o 00:03:10.833 TEST_HEADER include/spdk/vhost.h 00:03:10.833 TEST_HEADER include/spdk/vmd.h 00:03:10.833 TEST_HEADER include/spdk/xor.h 00:03:10.833 TEST_HEADER include/spdk/zipf.h 00:03:10.833 CXX test/cpp_headers/accel.o 00:03:10.833 LINK zipf 00:03:10.833 LINK interrupt_tgt 00:03:10.833 LINK spdk_trace_record 00:03:10.833 LINK poller_perf 00:03:10.833 LINK ioat_perf 00:03:10.833 LINK bdev_svc 00:03:10.833 CXX test/cpp_headers/accel_module.o 00:03:11.092 CXX test/cpp_headers/assert.o 00:03:11.092 LINK spdk_trace 00:03:11.092 CC test/env/vtophys/vtophys.o 00:03:11.092 LINK test_dma 00:03:11.092 CC test/rpc_client/rpc_client_test.o 00:03:11.092 CC examples/ioat/verify/verify.o 00:03:11.092 CXX test/cpp_headers/barrier.o 00:03:11.092 CC test/app/histogram_perf/histogram_perf.o 00:03:11.092 LINK vtophys 00:03:11.348 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:11.348 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:11.348 CC app/nvmf_tgt/nvmf_main.o 00:03:11.348 LINK rpc_client_test 00:03:11.348 CXX test/cpp_headers/base64.o 00:03:11.348 LINK verify 00:03:11.348 LINK mem_callbacks 00:03:11.348 LINK histogram_perf 00:03:11.348 CC app/iscsi_tgt/iscsi_tgt.o 00:03:11.348 LINK nvmf_tgt 00:03:11.606 CXX test/cpp_headers/bdev.o 00:03:11.606 CXX test/cpp_headers/bdev_module.o 00:03:11.606 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:11.606 CC examples/thread/thread/thread_ex.o 00:03:11.606 LINK iscsi_tgt 00:03:11.606 CC examples/sock/hello_world/hello_sock.o 00:03:11.606 LINK nvme_fuzz 00:03:11.606 CC app/spdk_tgt/spdk_tgt.o 00:03:11.864 CXX test/cpp_headers/bdev_zone.o 00:03:11.864 LINK env_dpdk_post_init 00:03:11.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.864 LINK thread 00:03:11.864 LINK hello_sock 00:03:11.864 CC examples/vmd/lsvmd/lsvmd.o 00:03:11.864 LINK spdk_tgt 00:03:11.864 CC test/env/memory/memory_ut.o 00:03:11.864 CXX test/cpp_headers/bit_array.o 00:03:12.123 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:12.123 CC test/env/pci/pci_ut.o 00:03:12.123 LINK lsvmd 00:03:12.123 CXX test/cpp_headers/bit_pool.o 00:03:12.123 CC examples/vmd/led/led.o 00:03:12.123 CC examples/idxd/perf/perf.o 00:03:12.381 CC app/spdk_lspci/spdk_lspci.o 00:03:12.381 CC examples/nvme/hello_world/hello_world.o 00:03:12.381 CXX test/cpp_headers/blob_bdev.o 00:03:12.381 LINK led 00:03:12.381 CC examples/nvme/reconnect/reconnect.o 00:03:12.381 LINK vhost_fuzz 00:03:12.381 LINK pci_ut 00:03:12.381 LINK spdk_lspci 00:03:12.639 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.639 LINK hello_world 00:03:12.639 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.639 LINK idxd_perf 00:03:12.639 CC app/spdk_nvme_perf/perf.o 00:03:12.639 LINK reconnect 00:03:12.639 CC examples/nvme/arbitration/arbitration.o 00:03:12.639 CXX test/cpp_headers/blobfs.o 00:03:12.639 CXX test/cpp_headers/blob.o 00:03:12.897 CC examples/nvme/hotplug/hotplug.o 00:03:12.897 CC 
examples/accel/perf/accel_perf.o 00:03:12.897 CXX test/cpp_headers/conf.o 00:03:12.897 CC app/spdk_nvme_identify/identify.o 00:03:12.897 LINK iscsi_fuzz 00:03:12.897 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.897 LINK hotplug 00:03:13.156 LINK arbitration 00:03:13.156 LINK nvme_manage 00:03:13.156 LINK memory_ut 00:03:13.156 CXX test/cpp_headers/config.o 00:03:13.156 CXX test/cpp_headers/cpuset.o 00:03:13.156 LINK spdk_nvme_discover 00:03:13.156 CXX test/cpp_headers/crc16.o 00:03:13.156 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:13.463 CC app/spdk_top/spdk_top.o 00:03:13.463 LINK accel_perf 00:03:13.463 CC test/app/jsoncat/jsoncat.o 00:03:13.463 CXX test/cpp_headers/crc32.o 00:03:13.463 LINK cmb_copy 00:03:13.463 LINK jsoncat 00:03:13.463 CC test/event/event_perf/event_perf.o 00:03:13.463 CC app/vhost/vhost.o 00:03:13.722 LINK spdk_nvme_perf 00:03:13.722 CC examples/blob/hello_world/hello_blob.o 00:03:13.722 CXX test/cpp_headers/crc64.o 00:03:13.722 CC examples/nvme/abort/abort.o 00:03:13.722 LINK event_perf 00:03:13.722 LINK vhost 00:03:13.722 CC test/app/stub/stub.o 00:03:13.722 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:13.722 LINK spdk_nvme_identify 00:03:13.722 CXX test/cpp_headers/dif.o 00:03:13.722 LINK hello_blob 00:03:13.980 CC app/spdk_dd/spdk_dd.o 00:03:13.980 CC test/event/reactor/reactor.o 00:03:13.980 LINK stub 00:03:13.980 LINK pmr_persistence 00:03:13.980 CXX test/cpp_headers/dma.o 00:03:13.980 LINK abort 00:03:13.980 CC test/event/reactor_perf/reactor_perf.o 00:03:13.980 CC test/event/app_repeat/app_repeat.o 00:03:13.980 LINK reactor 00:03:14.238 CC examples/blob/cli/blobcli.o 00:03:14.238 CXX test/cpp_headers/endian.o 00:03:14.238 LINK spdk_top 00:03:14.238 CXX test/cpp_headers/env_dpdk.o 00:03:14.238 LINK reactor_perf 00:03:14.238 LINK app_repeat 00:03:14.238 CXX test/cpp_headers/env.o 00:03:14.238 CC test/nvme/aer/aer.o 00:03:14.238 CC examples/bdev/hello_world/hello_bdev.o 00:03:14.238 CXX test/cpp_headers/event.o 00:03:14.496 LINK spdk_dd 00:03:14.496 CXX test/cpp_headers/fd_group.o 00:03:14.496 CC app/fio/nvme/fio_plugin.o 00:03:14.496 CC examples/bdev/bdevperf/bdevperf.o 00:03:14.496 CC test/event/scheduler/scheduler.o 00:03:14.496 LINK aer 00:03:14.496 LINK hello_bdev 00:03:14.496 CXX test/cpp_headers/fd.o 00:03:14.754 CC test/accel/dif/dif.o 00:03:14.754 CC test/nvme/reset/reset.o 00:03:14.754 LINK blobcli 00:03:14.754 CXX test/cpp_headers/file.o 00:03:14.754 CXX test/cpp_headers/ftl.o 00:03:14.754 LINK scheduler 00:03:14.754 CC test/blobfs/mkfs/mkfs.o 00:03:15.013 CXX test/cpp_headers/gpt_spec.o 00:03:15.013 LINK reset 00:03:15.013 CC test/lvol/esnap/esnap.o 00:03:15.013 CC test/nvme/sgl/sgl.o 00:03:15.013 LINK dif 00:03:15.013 CXX test/cpp_headers/hexlify.o 00:03:15.013 LINK mkfs 00:03:15.013 LINK spdk_nvme 00:03:15.271 CC test/nvme/e2edp/nvme_dp.o 00:03:15.271 CC test/nvme/overhead/overhead.o 00:03:15.271 CC app/fio/bdev/fio_plugin.o 00:03:15.271 CXX test/cpp_headers/histogram_data.o 00:03:15.271 CC test/nvme/err_injection/err_injection.o 00:03:15.271 LINK bdevperf 00:03:15.271 LINK sgl 00:03:15.271 CC test/nvme/startup/startup.o 00:03:15.529 CXX test/cpp_headers/idxd.o 00:03:15.529 LINK nvme_dp 00:03:15.529 LINK overhead 00:03:15.529 LINK err_injection 00:03:15.529 CC test/bdev/bdevio/bdevio.o 00:03:15.529 LINK startup 00:03:15.529 CC test/nvme/reserve/reserve.o 00:03:15.529 CXX test/cpp_headers/idxd_spec.o 00:03:15.787 CXX test/cpp_headers/init.o 00:03:15.787 CC test/nvme/simple_copy/simple_copy.o 00:03:15.787 LINK spdk_bdev 00:03:15.787 
CXX test/cpp_headers/ioat.o 00:03:15.787 CC test/nvme/connect_stress/connect_stress.o 00:03:15.787 CC examples/nvmf/nvmf/nvmf.o 00:03:15.787 CXX test/cpp_headers/ioat_spec.o 00:03:15.787 LINK reserve 00:03:15.787 CXX test/cpp_headers/iscsi_spec.o 00:03:16.045 CXX test/cpp_headers/json.o 00:03:16.045 LINK connect_stress 00:03:16.045 LINK bdevio 00:03:16.045 CC test/nvme/boot_partition/boot_partition.o 00:03:16.045 LINK simple_copy 00:03:16.045 CXX test/cpp_headers/jsonrpc.o 00:03:16.045 CXX test/cpp_headers/keyring.o 00:03:16.045 CC test/nvme/compliance/nvme_compliance.o 00:03:16.045 CXX test/cpp_headers/keyring_module.o 00:03:16.045 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.045 LINK nvmf 00:03:16.045 LINK boot_partition 00:03:16.045 CXX test/cpp_headers/likely.o 00:03:16.045 CXX test/cpp_headers/log.o 00:03:16.304 CXX test/cpp_headers/lvol.o 00:03:16.304 CXX test/cpp_headers/memory.o 00:03:16.304 LINK fused_ordering 00:03:16.304 CXX test/cpp_headers/mmio.o 00:03:16.304 CXX test/cpp_headers/nbd.o 00:03:16.304 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.304 CXX test/cpp_headers/net.o 00:03:16.304 CC test/nvme/fdp/fdp.o 00:03:16.304 LINK nvme_compliance 00:03:16.562 CC test/nvme/cuse/cuse.o 00:03:16.562 CXX test/cpp_headers/notify.o 00:03:16.562 CXX test/cpp_headers/nvme.o 00:03:16.562 CXX test/cpp_headers/nvme_intel.o 00:03:16.562 CXX test/cpp_headers/nvme_ocssd.o 00:03:16.562 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.562 LINK doorbell_aers 00:03:16.562 CXX test/cpp_headers/nvme_spec.o 00:03:16.562 CXX test/cpp_headers/nvme_zns.o 00:03:16.562 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.562 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:16.820 CXX test/cpp_headers/nvmf.o 00:03:16.820 CXX test/cpp_headers/nvmf_spec.o 00:03:16.820 LINK fdp 00:03:16.820 CXX test/cpp_headers/nvmf_transport.o 00:03:16.820 CXX test/cpp_headers/opal.o 00:03:16.820 CXX test/cpp_headers/opal_spec.o 00:03:16.820 CXX test/cpp_headers/pci_ids.o 00:03:16.820 CXX test/cpp_headers/pipe.o 00:03:16.820 CXX test/cpp_headers/queue.o 00:03:16.820 CXX test/cpp_headers/reduce.o 00:03:16.820 CXX test/cpp_headers/rpc.o 00:03:16.820 CXX test/cpp_headers/scheduler.o 00:03:17.078 CXX test/cpp_headers/scsi.o 00:03:17.078 CXX test/cpp_headers/scsi_spec.o 00:03:17.078 CXX test/cpp_headers/sock.o 00:03:17.078 CXX test/cpp_headers/stdinc.o 00:03:17.078 CXX test/cpp_headers/string.o 00:03:17.078 CXX test/cpp_headers/thread.o 00:03:17.078 CXX test/cpp_headers/trace.o 00:03:17.078 CXX test/cpp_headers/trace_parser.o 00:03:17.078 CXX test/cpp_headers/tree.o 00:03:17.078 CXX test/cpp_headers/ublk.o 00:03:17.078 CXX test/cpp_headers/util.o 00:03:17.078 CXX test/cpp_headers/uuid.o 00:03:17.078 CXX test/cpp_headers/version.o 00:03:17.078 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.337 CXX test/cpp_headers/vfio_user_spec.o 00:03:17.337 CXX test/cpp_headers/vhost.o 00:03:17.337 CXX test/cpp_headers/vmd.o 00:03:17.337 CXX test/cpp_headers/xor.o 00:03:17.337 CXX test/cpp_headers/zipf.o 00:03:17.904 LINK cuse 00:03:20.435 LINK esnap 00:03:21.001 ************************************ 00:03:21.001 END TEST make 00:03:21.001 ************************************ 00:03:21.001 00:03:21.001 real 1m6.030s 00:03:21.001 user 6m29.561s 00:03:21.001 sys 1m38.093s 00:03:21.001 12:59:12 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:21.001 12:59:12 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.001 12:59:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.001 12:59:12 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:03:21.001 12:59:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.001 12:59:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.001 12:59:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.001 12:59:12 -- pm/common@44 -- $ pid=5140 00:03:21.001 12:59:12 -- pm/common@50 -- $ kill -TERM 5140 00:03:21.001 12:59:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.001 12:59:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.001 12:59:12 -- pm/common@44 -- $ pid=5142 00:03:21.001 12:59:12 -- pm/common@50 -- $ kill -TERM 5142 00:03:21.001 12:59:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:21.001 12:59:13 -- nvmf/common.sh@7 -- # uname -s 00:03:21.001 12:59:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.001 12:59:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.001 12:59:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.001 12:59:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.001 12:59:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.001 12:59:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.001 12:59:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.001 12:59:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.001 12:59:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.001 12:59:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.001 12:59:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:03:21.001 12:59:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:03:21.001 12:59:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.001 12:59:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.001 12:59:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:21.001 12:59:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.001 12:59:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:21.001 12:59:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.001 12:59:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.001 12:59:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.001 12:59:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.001 12:59:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.001 12:59:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.001 12:59:13 -- paths/export.sh@5 -- # export PATH 00:03:21.001 12:59:13 
-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.001 12:59:13 -- nvmf/common.sh@47 -- # : 0 00:03:21.001 12:59:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:21.001 12:59:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:21.001 12:59:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.002 12:59:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.002 12:59:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.002 12:59:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:21.002 12:59:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:21.002 12:59:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:21.002 12:59:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.002 12:59:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.002 12:59:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.002 12:59:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.002 12:59:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.002 12:59:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.002 12:59:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.002 12:59:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.002 12:59:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.002 12:59:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.002 12:59:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.002 12:59:13 -- spdk/autotest.sh@48 -- # udevadm_pid=52804 00:03:21.002 12:59:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.002 12:59:13 -- pm/common@17 -- # local monitor 00:03:21.002 12:59:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.002 12:59:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.002 12:59:13 -- pm/common@25 -- # sleep 1 00:03:21.002 12:59:13 -- pm/common@21 -- # date +%s 00:03:21.002 12:59:13 -- pm/common@21 -- # date +%s 00:03:21.002 12:59:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721912353 00:03:21.002 12:59:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721912353 00:03:21.002 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721912353_collect-vmstat.pm.log 00:03:21.002 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721912353_collect-cpu-load.pm.log 00:03:22.002 12:59:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.002 12:59:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.002 12:59:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:22.002 12:59:14 -- common/autotest_common.sh@10 -- # set +x 00:03:22.002 12:59:14 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.002 12:59:14 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:22.002 12:59:14 -- common/autotest_common.sh@10 -- # set +x 
00:03:22.002 12:59:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.002 12:59:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.261 12:59:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.261 12:59:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.261 12:59:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.261 12:59:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.261 12:59:14 -- common/autotest_common.sh@1455 -- # uname 00:03:22.261 12:59:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:22.261 12:59:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.261 12:59:14 -- common/autotest_common.sh@1475 -- # uname 00:03:22.261 12:59:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:22.261 12:59:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:22.261 12:59:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:22.261 12:59:14 -- spdk/autotest.sh@72 -- # hash lcov 00:03:22.261 12:59:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:22.261 12:59:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:22.261 --rc lcov_branch_coverage=1 00:03:22.261 --rc lcov_function_coverage=1 00:03:22.261 --rc genhtml_branch_coverage=1 00:03:22.261 --rc genhtml_function_coverage=1 00:03:22.261 --rc genhtml_legend=1 00:03:22.261 --rc geninfo_all_blocks=1 00:03:22.261 ' 00:03:22.261 12:59:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:22.261 --rc lcov_branch_coverage=1 00:03:22.261 --rc lcov_function_coverage=1 00:03:22.261 --rc genhtml_branch_coverage=1 00:03:22.261 --rc genhtml_function_coverage=1 00:03:22.261 --rc genhtml_legend=1 00:03:22.261 --rc geninfo_all_blocks=1 00:03:22.261 ' 00:03:22.261 12:59:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:22.261 --rc lcov_branch_coverage=1 00:03:22.261 --rc lcov_function_coverage=1 00:03:22.261 --rc genhtml_branch_coverage=1 00:03:22.261 --rc genhtml_function_coverage=1 00:03:22.261 --rc genhtml_legend=1 00:03:22.261 --rc geninfo_all_blocks=1 00:03:22.261 --no-external' 00:03:22.261 12:59:14 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:22.261 --rc lcov_branch_coverage=1 00:03:22.261 --rc lcov_function_coverage=1 00:03:22.261 --rc genhtml_branch_coverage=1 00:03:22.261 --rc genhtml_function_coverage=1 00:03:22.261 --rc genhtml_legend=1 00:03:22.261 --rc geninfo_all_blocks=1 00:03:22.261 --no-external' 00:03:22.261 12:59:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:22.261 lcov: LCOV version 1.14 00:03:22.261 12:59:14 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:37.135 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:37.135 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:49.356 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:49.356 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:49.356 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 
00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions 
found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:49.357 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:49.357 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:49.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:49.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:50.776 12:59:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:50.776 12:59:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.776 12:59:42 -- common/autotest_common.sh@10 -- # set +x 00:03:50.776 12:59:42 -- spdk/autotest.sh@91 -- # rm -f 00:03:50.776 12:59:42 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.602 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:51.602 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:51.602 12:59:43 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:51.602 12:59:43 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.602 12:59:43 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.602 12:59:43 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.602 12:59:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.602 12:59:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:51.602 12:59:43 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:51.602 12:59:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.602 12:59:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.602 12:59:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.602 12:59:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:51.602 12:59:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:51.602 12:59:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.602 12:59:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.602 12:59:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.602 12:59:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:51.602 12:59:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:51.602 12:59:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:51.602 12:59:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.602 12:59:43 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.602 
12:59:43 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:51.602 12:59:43 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:51.602 12:59:43 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:51.602 12:59:43 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.602 12:59:43 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:51.602 12:59:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.602 12:59:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.602 12:59:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:51.602 12:59:43 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:51.602 12:59:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:51.602 No valid GPT data, bailing 00:03:51.602 12:59:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.602 12:59:43 -- scripts/common.sh@391 -- # pt= 00:03:51.602 12:59:43 -- scripts/common.sh@392 -- # return 1 00:03:51.602 12:59:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:51.602 1+0 records in 00:03:51.602 1+0 records out 00:03:51.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445346 s, 235 MB/s 00:03:51.602 12:59:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.603 12:59:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.603 12:59:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:51.603 12:59:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:51.603 12:59:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:51.603 No valid GPT data, bailing 00:03:51.603 12:59:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.603 12:59:43 -- scripts/common.sh@391 -- # pt= 00:03:51.603 12:59:43 -- scripts/common.sh@392 -- # return 1 00:03:51.603 12:59:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:51.603 1+0 records in 00:03:51.603 1+0 records out 00:03:51.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455774 s, 230 MB/s 00:03:51.603 12:59:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.603 12:59:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.603 12:59:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:51.603 12:59:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:51.603 12:59:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:51.862 No valid GPT data, bailing 00:03:51.862 12:59:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:51.862 12:59:43 -- scripts/common.sh@391 -- # pt= 00:03:51.862 12:59:43 -- scripts/common.sh@392 -- # return 1 00:03:51.862 12:59:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:51.862 1+0 records in 00:03:51.862 1+0 records out 00:03:51.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00338873 s, 309 MB/s 00:03:51.862 12:59:43 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.862 12:59:43 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.862 12:59:43 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:51.862 12:59:43 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:51.862 12:59:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:51.862 No valid GPT data, bailing 00:03:51.862 12:59:43 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:51.862 12:59:43 -- scripts/common.sh@391 -- # pt= 00:03:51.862 12:59:43 -- scripts/common.sh@392 -- # return 1 00:03:51.862 12:59:43 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:51.862 1+0 records in 00:03:51.862 1+0 records out 00:03:51.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485087 s, 216 MB/s 00:03:51.862 12:59:43 -- spdk/autotest.sh@118 -- # sync 00:03:51.862 12:59:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.862 12:59:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.862 12:59:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:53.765 12:59:45 -- spdk/autotest.sh@124 -- # uname -s 00:03:53.765 12:59:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:53.765 12:59:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:53.765 12:59:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.765 12:59:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.765 12:59:45 -- common/autotest_common.sh@10 -- # set +x 00:03:53.765 ************************************ 00:03:53.765 START TEST setup.sh 00:03:53.765 ************************************ 00:03:53.765 12:59:45 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:53.765 * Looking for test storage... 00:03:53.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.765 12:59:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:53.765 12:59:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:53.765 12:59:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:53.765 12:59:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.765 12:59:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.765 12:59:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.765 ************************************ 00:03:53.765 START TEST acl 00:03:53.765 ************************************ 00:03:53.765 12:59:45 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:54.025 * Looking for test storage... 
00:03:54.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:54.025 12:59:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:54.025 12:59:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:54.025 12:59:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:54.025 12:59:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:54.025 12:59:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:54.025 12:59:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:54.025 12:59:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:54.025 12:59:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.025 12:59:46 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.593 12:59:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:54.593 12:59:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:54.593 12:59:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.593 12:59:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:54.593 12:59:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.593 12:59:46 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:55.528 12:59:47 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.528 Hugepages 00:03:55.528 node hugesize free / total 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.528 00:03:55.528 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:55.528 12:59:47 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:55.528 12:59:47 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.528 12:59:47 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.528 12:59:47 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:55.528 ************************************ 00:03:55.528 START TEST denied 00:03:55.528 ************************************ 00:03:55.528 12:59:47 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:55.528 12:59:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:55.528 12:59:47 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:55.529 12:59:47 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:55.529 12:59:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.529 12:59:47 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.463 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.463 12:59:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.031 ************************************ 00:03:57.031 END TEST denied 00:03:57.031 ************************************ 00:03:57.031 00:03:57.031 real 0m1.452s 00:03:57.031 user 0m0.576s 00:03:57.031 sys 0m0.782s 00:03:57.031 12:59:49 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.031 12:59:49 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:57.031 12:59:49 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:57.031 12:59:49 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.031 12:59:49 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.031 12:59:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.031 ************************************ 00:03:57.031 START TEST allowed 00:03:57.031 ************************************ 00:03:57.031 12:59:49 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:57.031 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:57.031 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:57.031 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:57.031 12:59:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.031 12:59:49 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.966 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:57.966 12:59:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:57.967 12:59:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.967 12:59:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.533 00:03:58.533 real 0m1.538s 00:03:58.533 user 0m0.674s 00:03:58.533 sys 0m0.837s 00:03:58.533 12:59:50 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.533 ************************************ 00:03:58.533 END TEST 
allowed 00:03:58.533 12:59:50 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:58.533 ************************************ 00:03:58.533 ************************************ 00:03:58.533 END TEST acl 00:03:58.533 ************************************ 00:03:58.533 00:03:58.533 real 0m4.789s 00:03:58.533 user 0m2.078s 00:03:58.533 sys 0m2.594s 00:03:58.533 12:59:50 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.533 12:59:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.792 12:59:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:58.792 12:59:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.792 12:59:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.792 12:59:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.792 ************************************ 00:03:58.792 START TEST hugepages 00:03:58.792 ************************************ 00:03:58.792 12:59:50 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:58.792 * Looking for test storage... 00:03:58.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5998292 kB' 'MemAvailable: 7396104 kB' 'Buffers: 2436 kB' 'Cached: 1612052 kB' 'SwapCached: 0 kB' 'Active: 435808 kB' 'Inactive: 1283140 kB' 'Active(anon): 114948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106152 kB' 'Mapped: 48644 kB' 'Shmem: 10488 kB' 'KReclaimable: 61500 kB' 'Slab: 133708 kB' 'SReclaimable: 61500 kB' 'SUnreclaim: 72208 kB' 'KernelStack: 6372 kB' 'PageTables: 4312 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 334908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.792 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.793 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 
12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 
12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
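[editor's note] The xtrace above is setup/common.sh walking /proc/meminfo key by key until it reaches Hugepagesize (2048 kB on this runner), after which setup/hugepages.sh records the default size and zeroes any pre-existing per-node hugepage reservations (clear_hp, CLEAR_HUGE=yes). A minimal Bash sketch of that logic, assuming a single-node system and ignoring the per-node meminfo handling the real helper also supports; function and variable names here are illustrative stand-ins, not the upstream code:

  get_meminfo_sketch() {               # simplified stand-in for the traced get_meminfo loop
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # every non-matching key is skipped (the long run of "continue" above)
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  default_hugepages=$(get_meminfo_sketch Hugepagesize)   # -> 2048 (kB), as echoed in the trace
  # clear_hp: reset any existing reservations before the tests run (needs root)
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
  done
  export CLEAR_HUGE=yes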
00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.794 12:59:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:58.794 12:59:50 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.794 12:59:50 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.794 12:59:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.794 ************************************ 00:03:58.794 START TEST default_setup 00:03:58.794 ************************************ 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.794 12:59:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.621 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.621 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8086908 kB' 'MemAvailable: 9484716 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452812 kB' 'Inactive: 1283144 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 61488 kB' 'Slab: 133824 kB' 'SReclaimable: 61488 kB' 'SUnreclaim: 72336 kB' 'KernelStack: 6380 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
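[editor's note] From here the trace is verify_nr_hugepages re-reading /proc/meminfo after scripts/setup.sh has run. The snapshot printed above already shows the outcome of default_setup (HugePages_Total: 1024 pages of Hugepagesize: 2048 kB, i.e. Hugetlb: 1024 x 2048 kB = 2097152 kB); the scanning loops that follow just pull out individual counters. Roughly, reusing the sketch helper from the earlier note (the keys are real /proc/meminfo fields, the variable names mirror the ones visible in the trace, and this is a sketch rather than the script's exact code):

  anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB here (no anonymous THP allocated)
  surp=$(get_meminfo_sketch HugePages_Surp)   # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)   # queried a little further down in this log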
00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.621 
12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.621 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8086908 kB' 'MemAvailable: 9484612 kB' 'Buffers: 2436 kB' 'Cached: 
1612040 kB' 'SwapCached: 0 kB' 'Active: 452968 kB' 'Inactive: 1283148 kB' 'Active(anon): 132108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123280 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61272 kB' 'Slab: 133576 kB' 'SReclaimable: 61272 kB' 'SUnreclaim: 72304 kB' 'KernelStack: 6372 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.622 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.623 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.624 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.886 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8088468 kB' 'MemAvailable: 9486172 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452700 kB' 'Inactive: 1283148 kB' 'Active(anon): 131840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122964 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61272 kB' 'Slab: 133576 kB' 'SReclaimable: 61272 kB' 'SUnreclaim: 72304 kB' 'KernelStack: 6388 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 
12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.887 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 
12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.888 
12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.888 nr_hugepages=1024 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.888 resv_hugepages=0 00:03:59.888 surplus_hugepages=0 00:03:59.888 anon_hugepages=0 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.888 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8088480 kB' 'MemAvailable: 9486184 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452812 kB' 'Inactive: 1283148 kB' 'Active(anon): 131952 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123096 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 61272 kB' 'Slab: 133572 kB' 'SReclaimable: 61272 kB' 'SUnreclaim: 72300 kB' 'KernelStack: 6388 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 
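At this point in the trace the default_setup test has derived surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd) from /proc/meminfo, echoed nr_hugepages=1024, and is re-reading HugePages_Total so that the check at setup/hugepages.sh@110 can compare HugePages_Total against nr_hugepages + surp + resv, which in this run reduces to 1024 == 1024 + 0 + 0. A minimal standalone sketch of that consistency check follows; it is illustrative only, and the helper below is a hypothetical stand-in rather than the SPDK get_meminfo (which uses the read loop traced in this log instead of awk):

meminfo() { awk -v k="$1" '$1 == k":" { print $2 }' /proc/meminfo; }   # hypothetical helper, not setup/common.sh
total=$(meminfo HugePages_Total)        # 1024 in this run
surp=$(meminfo HugePages_Surp)          # 0
rsvd=$(meminfo HugePages_Rsvd)          # 0
nr=$(cat /proc/sys/vm/nr_hugepages)     # the count the test configured (1024 here)
(( total == nr + surp + rsvd )) && echo "hugepage accounting consistent: $total == $nr + $surp + $rsvd"

Because both the surplus and the reserved counts are zero, the assertion collapses to HugePages_Total == nr_hugepages, which is why the test later prints node0=1024 expecting 1024 before the default_setup case is declared passed.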
00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 
12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.889 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
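The long run of pattern tests and continue entries surrounding this point is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one requested (HugePages_Total here); with xtrace enabled, every skipped field emits a [[ ... ]] test, a continue, an IFS=': ' reset and a read, which is what makes this part of the log so repetitive. A reduced sketch of that lookup loop, assuming only the key lookup matters (the node-specific meminfo path and the mapfile/prefix-stripping traced at setup/common.sh@23-29 are deliberately left out):

lookup_meminfo() {                          # simplified stand-in, not the SPDK get_meminfo itself
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # skip every other meminfo field (the 'continue' lines in this trace)
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
lookup_meminfo HugePages_Total              # prints 1024 in the run traced here

Each timestamped [[ ... ]] / continue pair in this section corresponds to one iteration of that while loop.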
00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8088804 kB' 'MemUsed: 4153176 kB' 'SwapCached: 0 kB' 'Active: 452608 kB' 'Inactive: 1283148 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1614476 kB' 'Mapped: 48588 kB' 'AnonPages: 122860 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61272 kB' 'Slab: 133560 kB' 'SReclaimable: 61272 kB' 'SUnreclaim: 72288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.890 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 
12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.891 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.892 node0=1024 expecting 1024 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.892 12:59:51 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.892 00:03:59.892 real 0m1.012s 00:03:59.892 user 0m0.471s 00:03:59.892 sys 0m0.454s 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:59.892 ************************************ 00:03:59.892 END TEST default_setup 00:03:59.892 ************************************ 00:03:59.892 12:59:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:59.892 12:59:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:59.892 12:59:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:59.892 12:59:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:59.892 12:59:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.892 ************************************ 00:03:59.892 START TEST per_node_1G_alloc 00:03:59.892 ************************************ 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.892 12:59:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.151 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.151 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.414 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9140340 kB' 'MemAvailable: 10538040 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452976 kB' 'Inactive: 1283152 kB' 'Active(anon): 132116 kB' 'Inactive(anon): 0 kB' 
'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123248 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133532 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72276 kB' 'KernelStack: 6392 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.415 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9140340 kB' 'MemAvailable: 10538040 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452952 kB' 'Inactive: 1283152 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123224 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133524 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72268 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.416 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.417 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9140340 kB' 'MemAvailable: 10538040 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1283152 kB' 'Active(anon): 131844 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133516 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72260 kB' 'KernelStack: 6336 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.418 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.419 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
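The wall of IFS=': ' / read -r var val _ / continue entries running through this part of the log is setup/common.sh:get_meminfo scanning /proc/meminfo key by key until it reaches the field it was asked for (HugePages_Rsvd here), then echoing that value and returning. A minimal sketch of the same scan, under the assumption of a hypothetical name get_meminfo_sketch; the real helper additionally switches to /sys/devices/system/node/node<N>/meminfo and strips the leading "Node N " prefix when a node number is supplied:

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching key produces one of the "continue" entries seen in the trace.
            [[ $var == "$get" ]] || continue
            echo "$val"        # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total on this runner
            return 0
        done < /proc/meminfo
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Rsvd  ->  0
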
00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
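The backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not corruption in the log: when bash xtrace prints a [[ ... == "$pattern" ]] test, it escapes each character of the quoted pattern so the logged form reads as a literal string rather than a glob, and the timestamp/function prefix on every entry comes from the custom PS4 the test harness sets. Roughly the same rendering can be reproduced with a plain bash snippet (hypothetical, not taken from the suite):

    get=HugePages_Rsvd
    set -x
    [[ HugePages_Total == "$get" ]]   # traced as: + [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x
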
00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 
12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.420 nr_hugepages=512 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:00.420 resv_hugepages=0 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.420 surplus_hugepages=0 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.420 anon_hugepages=0 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9140692 kB' 'MemAvailable: 10538392 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452432 kB' 'Inactive: 1283152 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122704 kB' 'Mapped: 48588 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133512 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72256 kB' 'KernelStack: 6304 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.420 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
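This second full pass over the same meminfo snapshot is verify-side bookkeeping: hugepages.sh@100 has just captured HugePages_Rsvd (resv=0), and the scan in progress re-reads HugePages_Total so that hugepages.sh@110 can assert that the kernel really holds the 512 pages the test requested, i.e. 512 == nr_hugepages + surp + resv. The check itself is plain integer arithmetic over three meminfo fields; a hedged stand-alone equivalent (variable names mirror the trace, not the script verbatim):

    nr_hugepages=512                                               # requested by per_node_1G_alloc
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "unexpected hugepage count: $total" >&2
    fi
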
00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.421 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9140692 kB' 'MemUsed: 3101288 kB' 'SwapCached: 0 kB' 'Active: 452432 kB' 'Inactive: 1283152 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1614476 kB' 'Mapped: 48588 kB' 'AnonPages: 122704 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61256 kB' 'Slab: 133512 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.422 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.422 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.423 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 12:59:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.424 node0=512 expecting 512 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:00.424 00:04:00.424 real 0m0.513s 00:04:00.424 user 0m0.272s 00:04:00.424 sys 0m0.276s 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.424 12:59:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.424 ************************************ 00:04:00.424 END TEST per_node_1G_alloc 00:04:00.424 ************************************ 00:04:00.424 12:59:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:00.424 12:59:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.424 12:59:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.424 12:59:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.424 ************************************ 00:04:00.424 START TEST even_2G_alloc 00:04:00.424 ************************************ 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.424 12:59:52 
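Annotation: the even_2G_alloc test that starts above sizes its request exactly as the trace shows: get_test_nr_hugepages receives 2097152 (kB), which works out to nr_hugepages=1024 once divided by the default hugepage size, and get_test_nr_hugepages_per_node (traced just below) hands that whole count to the single NUMA node of this VM. A minimal sketch of that sizing logic, using the function and variable names visible in the trace; the value default_hugepages=2048 is inferred from the "Hugepagesize: 2048 kB" field in the meminfo dumps, and the bodies are illustrative, not the verbatim script:
# Sketch of the sizing logic traced here (hugepages.sh@49-84); illustrative only.
get_test_nr_hugepages() {
	local size=$1                     # requested allocation in kB: 2097152 kB == 2 GiB
	local default_hugepages=2048      # Hugepagesize reported in the meminfo dumps, in kB
	(( size >= default_hugepages )) || return 1
	nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
	get_test_nr_hugepages_per_node
}
get_test_nr_hugepages_per_node() {
	local _nr_hugepages=$nr_hugepages _no_nodes=1   # a single NUMA node on this VM
	declare -g -a nodes_test=()
	# No per-node override was passed, so the one node receives the full count.
	nodes_test[_no_nodes - 1]=$_nr_hugepages        # traced as nodes_test[_no_nodes - 1]=1024
}
The test then runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes (both visible in the trace that follows) to reserve the pages before verify_nr_hugepages re-reads /proc/meminfo.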
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.424 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.946 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.946 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8094672 kB' 'MemAvailable: 9492372 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452912 kB' 'Inactive: 1283152 kB' 'Active(anon): 132052 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123208 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133552 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72296 kB' 'KernelStack: 6392 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 
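Annotation: the long run of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" records that dominates this block is setup/common.sh's get_meminfo walking every line of the /proc/meminfo snapshot it just printed until it hits the requested key. A minimal sketch of that helper as the trace presents it; the names (get, node, mem_f, mem, var, val) come from the trace, while the loop construct and the combined node check are assumptions about structure, not the verbatim script. The per-node path is skipped here because node= is empty, which is also why the traced -e test shows the odd node/node/meminfo path:
# Sketch of get_meminfo as traced (setup/common.sh@17-33): print one field of meminfo.
shopt -s extglob                               # needed for the "Node N " prefix strip below
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f=/proc/meminfo mem
	# Prefer the per-node file when a node number was passed (not the case in this trace).
	if [[ -n $node ]] && [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")           # strip the "Node N " prefix of per-node files
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue       # the comparisons filling this log
		echo "${val:-0}"                       # traced here as "echo 0"
		return 0
	done
	echo 0
}
Reading into var, val and a throwaway _ is what keeps the trailing "kB" unit out of the result, so the caller gets a bare number without any extra stripping.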
12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.946 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 
12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.947 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8094672 kB' 'MemAvailable: 9492372 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452620 kB' 'Inactive: 1283152 kB' 'Active(anon): 131760 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122872 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133552 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72296 kB' 'KernelStack: 6360 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 
12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.948 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.949 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8094672 kB' 'MemAvailable: 9492372 kB' 'Buffers: 2436 kB' 'Cached: 1612040 kB' 'SwapCached: 0 kB' 'Active: 452836 kB' 'Inactive: 1283152 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123136 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133568 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72312 kB' 'KernelStack: 6384 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 
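Annotation: the meminfo walk repeats three times in this block because verify_nr_hugepages collects AnonHugePages, HugePages_Surp and HugePages_Rsvd in turn; the surp=0 assignment above and the HugePages_Rsvd lookup that begins here are steps two and three. A rough, heavily simplified skeleton of that flow using the names visible in the trace; nodes_sys is assumed to be populated elsewhere from per-node counters, and the final comparison is a hypothetical stand-in for the traced "[[ 512 == 512 ]]" style check:
# Rough skeleton of verify_nr_hugepages as traced (setup/hugepages.sh@89 onward); simplified.
verify_nr_hugepages() {
	local node
	local sorted_t=() sorted_s=()
	local anon surp resv
	# Bail out if transparent hugepages are forced off outright.
	[[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]
	anon=$(get_meminfo AnonHugePages)    # traced result: anon=0
	surp=$(get_meminfo HugePages_Surp)   # traced result: surp=0
	resv=$(get_meminfo HugePages_Rsvd)   # the lookup that begins at this point in the log
	# Compare observed per-node counts against the expectations set up earlier;
	# the "node0=512 expecting 512" line seen above comes from this loop.
	for node in "${!nodes_test[@]}"; do
		sorted_t[nodes_test[node]]=1
		sorted_s[nodes_sys[node]]=1
		echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
	done
	[[ ${nodes_sys[0]} == "${nodes_test[0]}" ]]   # hypothetical final check for illustration
}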
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.950 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 
12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 
12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.951 nr_hugepages=1024 00:04:00.951 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.951 resv_hugepages=0 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.952 surplus_hugepages=0 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.952 anon_hugepages=0 00:04:00.952 12:59:52 
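The trace above is setup/common.sh's get_meminfo helper walking a captured /proc/meminfo snapshot one key at a time: each iteration resets IFS=': ', reads a "var val" pair, and skips on until the key equals the requested field (HugePages_Rsvd here); the target shows up as \H\u\g\e\P\a\g\e\s\_\R\s\v\d because xtrace escapes a quoted right-hand side of == character by character to mark it as a literal match rather than a glob pattern. The match yields 0, so hugepages.sh records resv=0 and prints the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary seen above. A minimal stand-alone sketch of the same lookup idea (not the actual setup/common.sh code, which also handles per-NUMA-node meminfo files) would be:

  # Sketch only: echo the value of one /proc/meminfo field, as the trace does.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_sketch HugePages_Rsvd   # prints 0 on this runner, hence resv=0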
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8094928 kB' 'MemAvailable: 9492624 kB' 'Buffers: 2436 kB' 'Cached: 1612036 kB' 'SwapCached: 0 kB' 'Active: 452664 kB' 'Inactive: 1283148 kB' 'Active(anon): 131804 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122948 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133552 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72296 kB' 'KernelStack: 6320 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
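The printf entry above is the whole /proc/meminfo snapshot the helper scans: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, i.e. 1024 pages x 2 MiB = 2 GiB, exactly the even_2G_alloc request. The loop that follows simply walks the remaining keys until HugePages_Total matches and echoes 1024. Outside the harness, the same fields can be spot-checked directly (shown purely as a sanity check, not as part of setup/common.sh):

  # Pull the hugepage counters the trace is scanning for straight from /proc/meminfo.
  awk '/^(HugePages_Total|HugePages_Free|Hugepagesize):/ {print $1, $2, $3}' /proc/meminfo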
00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.953 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc 
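With HugePages_Total confirmed at 1024, hugepages.sh@110 re-checks (( 1024 == nr_hugepages + surp + resv )) and get_nodes enumerates /sys/devices/system/node/node+([0-9]) (an extglob pattern), finding a single node on this VM (no_nodes=1) and recording its current count; the per-node loop then asks for HugePages_Surp on node 0 from the node-local meminfo file, which is what the trace below reads. A stand-alone sketch of that per-node bookkeeping (an illustration, not the harness code) could look like:

  # Sketch: record each NUMA node's current hugepage total, keyed by node id,
  # from the standard sysfs per-node meminfo files.
  declare -A nodes_sys
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      nodes_sys[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
  done
  echo "found ${#nodes_sys[@]} node(s) with ${nodes_sys[*]} hugepages"   # 1024 here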
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8094928 kB' 'MemUsed: 4147052 kB' 'SwapCached: 0 kB' 'Active: 452360 kB' 'Inactive: 1283152 kB' 'Active(anon): 131500 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1614476 kB' 'Mapped: 48592 kB' 'AnonPages: 122740 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61256 kB' 'Slab: 133548 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 
12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.955 node0=1024 expecting 1024 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.955 00:04:00.955 real 0m0.515s 00:04:00.955 user 0m0.246s 00:04:00.955 sys 0m0.304s 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.955 12:59:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.955 
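HugePages_Surp on node 0 also comes back 0, so nothing is subtracted from the per-node tally and the test prints node0=1024 expecting 1024; the closing [[ 1024 == \1\0\2\4 ]] is simply "actual == expected" with a literal (xtrace-escaped) right-hand side, and run_test reports the whole even_2G_alloc check taking roughly half a second of wall time. Reduced to its essentials (a sketch, not the hugepages.sh source), the final assertion is:

  # Sketch of the closing even_2G_alloc assertion: what node 0 reports must
  # equal what the test requested; reserved and surplus pages were both 0.
  expected=1024
  actual=1024   # the value echoed above as "node0=1024 expecting 1024"
  [[ $actual == "$expected" ]] || exit 1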
************************************ 00:04:00.955 END TEST even_2G_alloc 00:04:00.955 ************************************ 00:04:00.955 12:59:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:00.955 12:59:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.955 12:59:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.955 12:59:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.955 ************************************ 00:04:00.955 START TEST odd_alloc 00:04:00.955 ************************************ 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.955 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.525 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.525 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:01.525 12:59:53 
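run_test then starts odd_alloc, which deliberately requests an odd page count: HUGEMEM=2049 (MiB) becomes a 2098176 kB target, and with the 2048 kB default hugepage size that works out to nr_hugepages=1025. HUGE_EVEN_ALLOC=yes hands the request to scripts/setup.sh, which skips the virtio-blk controller at 0000:00:03.0 because its vda partitions are mounted and leaves the two emulated NVMe controllers (1b36 0010) on uio_pci_generic, after which verify_nr_hugepages re-runs the same meminfo checks. The page-count arithmetic, assuming a straightforward round-up over the default page size (the exact rounding inside get_test_nr_hugepages is not visible in this trace), is:

  # Assumed derivation of nr_hugepages for HUGEMEM=2049 MiB; the detail may
  # differ from hugepages.sh, but the result matches the nr_hugepages=1025 above.
  hugemem_mb=2049
  hugepagesize_kb=2048
  size_kb=$(( hugemem_mb * 1024 ))                               # 2098176 kB
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"                              # 1025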
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089268 kB' 'MemAvailable: 9486972 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452616 kB' 'Inactive: 1283156 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133532 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72276 kB' 'KernelStack: 6344 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.525 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.526 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089268 kB' 'MemAvailable: 9486972 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452760 kB' 'Inactive: 1283156 kB' 'Active(anon): 131900 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123084 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133660 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72404 kB' 'KernelStack: 6320 kB' 'PageTables: 4164 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.527 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.528 12:59:53 
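Reader's note: the long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" above are bash xtrace output from get_meminfo in setup/common.sh: the function snapshots /proc/meminfo, then walks it with IFS=': ' read -r var val _ until the requested key (here HugePages_Surp) matches, echoing its value (0, so surp=0). The stand-alone re-creation below is a simplified sketch; the real helper also supports the per-node meminfo files probed earlier in the trace.

  # Minimal re-creation of the get_meminfo pattern traced above (simplification assumed):
  get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # every non-matching key produces one "continue" in the xtrace
      echo "$val"                        # numeric value; a trailing "kB" unit lands in the discarded field
      return 0
    done < /proc/meminfo
  }
  surp=$(get_meminfo HugePages_Surp)     # -> 0 in the run above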
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.528 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089532 kB' 'MemAvailable: 9487236 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452616 kB' 'Inactive: 1283156 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122936 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133636 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72380 kB' 'KernelStack: 6336 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.529 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 
12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.530 nr_hugepages=1025 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:01.530 resv_hugepages=0 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.530 surplus_hugepages=0 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.530 anon_hugepages=0 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.530 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089484 
kB' 'MemAvailable: 9487188 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452404 kB' 'Inactive: 1283156 kB' 'Active(anon): 131544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122768 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133636 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72380 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 
12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
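The wall of IFS=': ' / read -r var val _ / continue entries running through this stretch is setup/common.sh's get_meminfo helper under set -x: it loads /proc/meminfo, walks every key until it reaches the one requested (HugePages_Rsvd, then HugePages_Total here), and echoes that key's value. A minimal standalone sketch of the same parsing idea, using a hypothetical function name rather than the repo's actual helper:

#!/usr/bin/env bash
# meminfo_value: return the value column for one /proc/meminfo key.
# Hypothetical illustration of the technique traced above, not setup/common.sh itself.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Rsvd    # prints 0 in the run traced here
meminfo_value HugePages_Total   # prints 1025 in the run traced here

The traced helper does the same thing with mapfile plus an explicit continue for every non-matching key, which is why each meminfo field shows up as its own xtrace line.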
00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.531 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.532 
12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089484 kB' 'MemUsed: 4152496 kB' 'SwapCached: 0 kB' 'Active: 452664 kB' 'Inactive: 1283156 kB' 'Active(anon): 131804 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1614480 kB' 'Mapped: 48592 kB' 'AnonPages: 123028 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61256 kB' 'Slab: 133636 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:01.532 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
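This pass is the per-node variant of the same scan: because get_meminfo was called with node 0, the file becomes /sys/devices/system/node/node0/meminfo and the helper first strips the leading "Node 0 " from every line (the mem=("${mem[@]#Node +([0-9]) }") step above) before matching keys. A rough equivalent, again with a hypothetical name; the prefix is stripped literally here since the node number is already known:

# node_meminfo_value: per-node counterpart of the sketch above (hypothetical name).
node_meminfo_value() {
    local node=$1 get=$2 line var val _
    while IFS= read -r line; do
        line=${line#"Node ${node} "}              # drop the "Node 0 " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

node_meminfo_value 0 HugePages_Surp    # prints 0 in the run traced here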
00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.533 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.534 node0=1025 expecting 1025 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:01.534 
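The odd_alloc case has been building to the accounting right above: the test asked for an odd number (1025) of 2048 kB pages, and verify_nr_hugepages now checks that HugePages_Total matches the requested count once surplus and reserved pages are folded in, and that the single NUMA node on this VM holds all 1025 of them ("node0=1025 expecting 1025"). A condensed sketch of that bookkeeping, assuming 2048 kB pages; check_hugepage_accounting is a hypothetical name, and the per-node count is read from sysfs here whereas the traced script re-reads the per-node meminfo:

check_hugepage_accounting() {
    local expected=$1
    local total rsvd surp node sum=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    # Pool-level check, as in hugepages.sh@107/@110: the kernel's total must
    # account for exactly the requested pages plus surplus and reserved ones.
    (( total == expected + surp + rsvd )) || return 1
    # Per-node check, as in the "node0=1025 expecting 1025" line above.
    for node in /sys/devices/system/node/node[0-9]*; do
        sum=$(( sum + $(cat "$node/hugepages/hugepages-2048kB/nr_hugepages") ))
    done
    (( sum == expected ))
}

check_hugepage_accounting 1025 && echo 'node0=1025 expecting 1025'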
00:04:01.534 real 0m0.535s 00:04:01.534 user 0m0.236s 00:04:01.534 sys 0m0.303s 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.534 12:59:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.534 ************************************ 00:04:01.534 END TEST odd_alloc 00:04:01.534 ************************************ 00:04:01.534 12:59:53 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:01.534 12:59:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.534 12:59:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.534 12:59:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.534 ************************************ 00:04:01.534 START TEST custom_alloc 00:04:01.534 ************************************ 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.534 12:59:53 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.106 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.106 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.106 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9143832 kB' 'MemAvailable: 10541536 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 453196 kB' 'Inactive: 1283156 kB' 'Active(anon): 132336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123452 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133672 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72416 kB' 'KernelStack: 6356 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.106 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
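The snapshot just printed is the first verification pass of custom_alloc: the meminfo dump reports HugePages_Total: 512 and Hugetlb: 1048576 kB, i.e. the 1 GiB pool that get_test_nr_hugepages 1048576 converted into 512 pages of the 2048 kB Hugepagesize before the run built HUGENODE='nodes_hp[0]=512' and invoked scripts/setup.sh. The sizing arithmetic, sketched with illustrative variable names (the kB unit for the requested size is inferred from the 1048576 -> 512 conversion in the log):

size_kb=1048576                                                      # requested pool: 1 GiB, as in the log
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_pages=$(( size_kb / hugepagesize_kb ))                            # 1048576 / 2048 = 512
echo "nr_hugepages=${nr_pages}"                                      # matches the nr_hugepages=512 assignment traced above

# The traced run then pins the whole pool on node 0 before calling setup.sh, roughly:
# HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh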
00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.107 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9143832 kB' 'MemAvailable: 10541536 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452520 kB' 'Inactive: 1283156 kB' 'Active(anon): 131660 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122816 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133680 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72424 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.108 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.109 12:59:54 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9143832 kB' 'MemAvailable: 10541536 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452520 kB' 'Inactive: 1283156 kB' 'Active(anon): 131660 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123076 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133680 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72424 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.110 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.111 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.112 nr_hugepages=512 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:02.112 resv_hugepages=0 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.112 surplus_hugepages=0 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.112 anon_hugepages=0 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.112 
12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9143832 kB' 'MemAvailable: 10541536 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452412 kB' 'Inactive: 1283156 kB' 'Active(anon): 131552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122700 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133656 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72400 kB' 'KernelStack: 6320 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.112 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.113 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9143832 kB' 'MemUsed: 3098148 kB' 'SwapCached: 0 kB' 'Active: 452672 kB' 'Inactive: 1283156 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 
'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1614480 kB' 'Mapped: 48592 kB' 'AnonPages: 122960 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61256 kB' 'Slab: 133636 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
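The get_nodes step just traced enumerates /sys/devices/system/node/node<N> directories (a single node on this VM, no_nodes=1) and records each node's HugePages_Total, then the per-node dump for node0 is read from sysfs. Note that the MemUsed figure in that dump is simply MemTotal minus MemFree (12241980 kB - 9143832 kB = 3098148 kB). A short sketch of the enumeration, with assumed names and relying on the get_meminfo_sketch helper from the earlier block:

    # Sketch of the node enumeration step (assumed names; uses the
    # get_meminfo_sketch helper defined above).
    shopt -s extglob nullglob
    declare -a nodes_total=()
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}                                  # "node0" -> "0"
        nodes_total[node]=$(get_meminfo_sketch HugePages_Total "$node")
    done
    echo "nodes: ${!nodes_total[*]} -> ${nodes_total[*]} hugepages"  # here: nodes: 0 -> 512 hugepages

    # Consistency check on the node0 dump above: MemUsed is MemTotal - MemFree.
    echo $(( 12241980 - 9143832 ))    # 3098148, matching the reported MemUsed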
read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.114 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.115 node0=512 expecting 512 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.115 12:59:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:02.115 00:04:02.116 real 0m0.517s 00:04:02.116 user 0m0.268s 00:04:02.116 sys 0m0.285s 00:04:02.116 12:59:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:02.116 12:59:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.116 ************************************ 00:04:02.116 END TEST custom_alloc 00:04:02.116 ************************************ 00:04:02.116 12:59:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:02.116 12:59:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.116 12:59:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.116 12:59:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.116 ************************************ 00:04:02.116 START TEST no_shrink_alloc 00:04:02.116 ************************************ 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:02.116 12:59:54 
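custom_alloc finishes here: after folding the (zero) reserved and surplus pages into the per-node expectation, the script compares what node 0 reports against what it asked for, prints 'node0=512 expecting 512', and the timing summary and END TEST banner follow before no_shrink_alloc starts. A hedged sketch of that closing comparison, using the helper assumed earlier:

    # Hedged sketch of the final per-node check (variable names assumed).
    expected=512                                   # pages requested on node 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp 0)    # 0 in this run
    (( expected += resv + surp ))                  # both zero here, so still 512
    actual=$(get_meminfo_sketch HugePages_Total 0) # what node0's sysfs meminfo reports
    echo "node0=$actual expecting $expected"
    [[ $actual == "$expected" ]] || { echo "custom_alloc: node0 mismatch" >&2; exit 1; }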
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.116 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.688 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.688 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.688 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
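The no_shrink_alloc case begins by turning the requested size into a page count: with 2048 kB hugepages, 2097152 works out to 1024 pages, all assigned to the single requested node 0, which is consistent with the later 'Hugetlb: 2097152 kB' figure. scripts/setup.sh is then re-run, reporting that 0000:00:10.0 and 0000:00:11.0 are already bound to uio_pci_generic while 0000:00:03.0 is skipped because the vda partitions mounted on it are active. A sketch of the size-to-pages arithmetic follows; treating the size as kB is an assumption made to match the traced numbers.

    # Sketch of the size -> hugepage-count computation (assumes the size is in kB,
    # which matches the 2048 kB page size and the later Hugetlb figure).
    size_kb=2097152
    hugepage_kb=$(get_meminfo_sketch Hugepagesize)   # 2048 on this runner
    nr_hugepages=$(( size_kb / hugepage_kb ))        # 2097152 / 2048 = 1024
    declare -a nodes_test=()
    nodes_test[0]=$nr_hugepages                      # only node 0 was requested
    echo "nr_hugepages=$nr_hugepages on node 0"      # -> nr_hugepages=1024 on node 0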
get=AnonHugePages 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.688 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089084 kB' 'MemAvailable: 9486788 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 453160 kB' 'Inactive: 1283156 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123480 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133620 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 6360 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
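verify_nr_hugepages only reads AnonHugePages because transparent hugepages are not globally off: the pattern test traced above checks that the THP mode string ('always [madvise] never') does not contain '[never]'. The meminfo dump it is parsing already shows the expected state for this test, HugePages_Total 1024 and Hugetlb 2097152 kB, i.e. 1024 pages of 2048 kB each. A sketch of that gate; the sysfs path is the standard kernel location for the THP mode string, and the surrounding variable names are assumptions.

    # Sketch of the transparent-hugepage gate (standard sysfs path; names assumed).
    anon=0
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)                 # 0 kB in this run
    fi
    echo "anon_hugepages=$anon"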
00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.689 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089684 kB' 'MemAvailable: 9487388 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452692 kB' 'Inactive: 1283156 kB' 'Active(anon): 131832 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123012 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133620 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72364 kB' 'KernelStack: 6328 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.690 12:59:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.690 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 
12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.691 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089684 kB' 'MemAvailable: 9487388 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452460 kB' 'Inactive: 1283156 kB' 'Active(anon): 131600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122744 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133636 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72380 kB' 'KernelStack: 6320 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.692 12:59:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 
12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.693 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.694 nr_hugepages=1024 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.694 resv_hugepages=0 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.694 surplus_hugepages=0 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.694 anon_hugepages=0 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089684 kB' 'MemAvailable: 9487388 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 452720 kB' 'Inactive: 1283156 kB' 'Active(anon): 131860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48592 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133636 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72380 kB' 'KernelStack: 6320 kB' 'PageTables: 4160 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.694 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 
12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 
12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.695 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- 
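The scan traced above is setup/common.sh's get_meminfo walking the printf'd 'key: value' pairs one at a time: every field that is not the requested HugePages_Total is skipped with continue, and the matching field's value (1024) is echoed back before the function returns. A minimal sketch of that parsing pattern, reading /proc/meminfo directly (the function name and flattened structure here are illustrative, not the script's actual code):

get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # skip every field until the requested key comes up
        [[ $var == "$get" ]] || continue
        echo "$val"    # e.g. 1024 for HugePages_Total on this runner
        return 0
    done < /proc/meminfo
    return 1
}

Called as get_meminfo_sketch HugePages_Total, this yields the same 1024 the trace settles on.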
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8089684 kB' 'MemUsed: 4152296 kB' 'SwapCached: 0 kB' 'Active: 452660 kB' 'Inactive: 1283156 kB' 'Active(anon): 131800 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1614480 kB' 'Mapped: 48592 kB' 'AnonPages: 122928 kB' 'Shmem: 10464 kB' 'KernelStack: 6356 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61256 kB' 'Slab: 133628 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- 
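Here the reader is invoked as get_meminfo HugePages_Surp 0, so it swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo and strips the leading 'Node <N> ' prefix from every line before splitting, because per-node counters are reported as e.g. 'Node 0 HugePages_Surp: 0'. A hedged sketch of that source selection (the helper name is made up for illustration):

pick_meminfo_source() {
    local node=${1:-}
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        # per-node lines carry a 'Node <N> ' prefix that must be dropped
        # before the 'key: value' split
        sed "s/^Node $node //" "/sys/devices/system/node/node$node/meminfo"
    else
        cat /proc/meminfo
    fi
}

Something like pick_meminfo_source 0 | grep HugePages_Surp then reproduces the 'HugePages_Surp: 0' the scan below ends on.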
setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.696 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 
12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.697 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.698 12:59:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.698 node0=1024 expecting 1024 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.698 12:59:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.958 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.958 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.958 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.958 12:59:55 
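At this point the first verification has passed (node0=1024 expecting 1024), and the no_shrink_alloc step re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no; as the INFO line above records, the existing 1024-page pool on node0 is left in place rather than shrunk to the smaller request, and verify_nr_hugepages then starts sampling the counters again. With HugePages_Rsvd and HugePages_Surp both 0, the per-node arithmetic being traced reduces to roughly the following (values hard-coded to the ones logged; variable names follow the trace, but the structure is a simplified sketch):

nr_hugepages=1024 resv=0
nodes_test=(1024)                     # index 0 -> node0's hugepage count
(( 1024 == nr_hugepages + resv + 0 )) || echo 'global hugepage count mismatch'
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))    # fold in reserved pages (0 here)
    (( nodes_test[node] += 0 ))       # HugePages_Surp read above for node0
    echo "node$node=${nodes_test[node]} expecting $nr_hugepages"
done

Each node's total therefore has to match the global HugePages_Total exactly for the checks at hugepages.sh@110 and @130 to pass.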
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8093068 kB' 'MemAvailable: 9490772 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 448756 kB' 'Inactive: 1283156 kB' 'Active(anon): 127896 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 119064 kB' 'Mapped: 48232 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133552 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72296 kB' 'KernelStack: 6340 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.958 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 
12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.959 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8093068 kB' 'MemAvailable: 9490772 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 448448 kB' 'Inactive: 1283156 kB' 'Active(anon): 127588 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118700 kB' 'Mapped: 47852 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133548 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6240 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.960 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.222 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.223 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8092816 kB' 'MemAvailable: 9490520 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 448240 kB' 'Inactive: 1283156 kB' 'Active(anon): 127380 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118520 kB' 'Mapped: 47852 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 'Slab: 133548 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72292 kB' 
'KernelStack: 6256 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.224 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.225 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
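The repeated `continue` / `IFS=': '` / `read -r var val _` entries in this trace are the get_meminfo helper from setup/common.sh scanning each /proc/meminfo key under `set -x` until it reaches the requested field (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total), at which point it echoes the value and returns. A minimal sketch of that lookup, simplified to read /proc/meminfo directly instead of going through the mapfile/printf steps visible in the trace, and therefore an approximation of the behavior rather than the exact upstream function:

    get_meminfo() {                              # e.g. get_meminfo HugePages_Rsvd
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue     # non-matching keys produce the repeated trace lines
            echo "$val"                          # matching key: echo its value and stop
            return 0
        done < /proc/meminfo
        return 1                                 # requested key not present
    }

With 1024 hugepages reserved and none in use, these lookups resolve to anon=0, surp=0 and (just below) resv=0, matching the setup/hugepages.sh check against nr_hugepages=1024.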
00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.226 nr_hugepages=1024 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.226 resv_hugepages=0 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.226 surplus_hugepages=0 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.226 anon_hugepages=0 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.226 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8092816 kB' 'MemAvailable: 9490520 kB' 'Buffers: 2436 kB' 'Cached: 1612044 kB' 'SwapCached: 0 kB' 'Active: 448180 kB' 'Inactive: 1283156 kB' 'Active(anon): 127320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118424 kB' 'Mapped: 47852 kB' 'Shmem: 10464 kB' 'KReclaimable: 61256 kB' 
'Slab: 133548 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72292 kB' 'KernelStack: 6240 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.227 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
(the identical IFS / read -r var val _ / [[ $var == HugePages_Total ]] / continue cycle then repeats for the remaining meminfo keys: Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted, none of which match HugePages_Total) 
00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.229 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8092816 kB' 'MemUsed: 4149164 kB' 'SwapCached: 0 kB' 'Active: 448220 kB' 'Inactive: 1283156 kB' 'Active(anon): 127360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1283156 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1614480 kB' 'Mapped: 47852 kB' 'AnonPages: 118516 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61256 kB' 'Slab: 133548 kB' 'SReclaimable: 61256 kB' 'SUnreclaim: 72292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.229 12:59:55 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.229 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
(the same cycle repeats for the remaining node0 meminfo keys: MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted, none of which match HugePages_Surp) 
00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.231 node0=1024 expecting 1024 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.231 00:04:03.231 real 0m1.036s 00:04:03.231 user 0m0.505s 00:04:03.231 sys 0m0.573s 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.231 12:59:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.231 ************************************ 00:04:03.231 END TEST no_shrink_alloc 00:04:03.231 ************************************ 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:03.231 12:59:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:03.231 00:04:03.231 real 0m4.569s 00:04:03.231 user 0m2.156s 00:04:03.231 sys 0m2.448s 00:04:03.231 12:59:55 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:03.231 12:59:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.231 ************************************ 00:04:03.231 END TEST hugepages 00:04:03.231 ************************************ 00:04:03.231 12:59:55 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:03.231 12:59:55 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.231 12:59:55 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.231 12:59:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:03.231 ************************************ 00:04:03.231 START TEST driver 00:04:03.231 ************************************ 00:04:03.231 12:59:55 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:03.489 * Looking for test storage... 00:04:03.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:03.489 12:59:55 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:03.489 12:59:55 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.489 12:59:55 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.056 12:59:56 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:04.056 12:59:56 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.056 12:59:56 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.056 12:59:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.056 ************************************ 00:04:04.056 START TEST guess_driver 00:04:04.056 ************************************ 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:04.056 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:04.056 Looking for driver=uio_pci_generic 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:04.056 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.057 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:04.057 12:59:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.057 12:59:56 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:04.623 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:04.623 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:04.623 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.623 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.623 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:04.623 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.881 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.882 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:04.882 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.882 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:04.882 12:59:56 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:04.882 12:59:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.882 12:59:56 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.448 00:04:05.448 real 0m1.390s 00:04:05.448 user 0m0.536s 00:04:05.448 sys 0m0.868s 00:04:05.448 12:59:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.449 12:59:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:05.449 ************************************ 00:04:05.449 END TEST guess_driver 00:04:05.449 ************************************ 00:04:05.449 ************************************ 00:04:05.449 END TEST driver 00:04:05.449 ************************************ 00:04:05.449 00:04:05.449 real 0m2.082s 00:04:05.449 user 0m0.760s 00:04:05.449 sys 0m1.389s 00:04:05.449 12:59:57 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.449 12:59:57 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:05.449 12:59:57 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:05.449 12:59:57 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.449 12:59:57 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.449 12:59:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.449 ************************************ 00:04:05.449 START TEST devices 00:04:05.449 
************************************ 00:04:05.449 12:59:57 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:05.449 * Looking for test storage... 00:04:05.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.449 12:59:57 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:05.449 12:59:57 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:05.449 12:59:57 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.449 12:59:57 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:06.385 12:59:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
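The get_zoned_devs pass that the devices suite runs first (the is_block_zoned checks traced above, where every namespace reports a zoned model of none) simply reads each device's queue/zoned attribute from sysfs. A minimal standalone sketch of that probe, assuming only a stock sysfs layout and an nvme* glob; it is not the repository's own helper:

  #!/usr/bin/env bash
  # Collect block devices whose queue reports a zoned model other than "none".
  # Devices listed here would be excluded from the mount tests, matching the
  # [[ none != none ]] comparisons in the trace above.
  zoned=()
  for dev in /sys/block/nvme*; do
      [[ -e $dev/queue/zoned ]] || continue        # attribute missing on very old kernels
      model=$(<"$dev/queue/zoned")                 # "none", "host-aware" or "host-managed"
      [[ $model == none ]] || zoned+=("${dev##*/}")
  done
  printf 'zoned devices: %s\n' "${zoned[*]:-none found}"
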
00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:06.385 12:59:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:06.385 12:59:58 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:06.385 No valid GPT data, bailing 00:04:06.385 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:06.385 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:06.385 12:59:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:06.385 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:06.385 12:59:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:06.386 No valid GPT data, bailing 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
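Each non-zoned namespace is then screened twice before it may join the blocks array: block_in_use asks spdk-gpt.py and blkid whether the device already carries a partition table ("No valid GPT data, bailing" means it does not), and its capacity is compared against min_disk_size=3221225472, i.e. 3 GiB (the namespaces above report 4294967296 bytes, so they pass). A rough equivalent using only blkid and sysfs, with the spdk-gpt.py step left out and the device name used purely as an example:

  #!/usr/bin/env bash
  # Accept a block device only if blkid sees no partition-table signature on it
  # and its capacity is at least 3 GiB, mirroring the checks traced above.
  min_disk_size=$((3 * 1024 * 1024 * 1024))
  block_ok() {
      local block=$1 pt sectors bytes
      pt=$(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null)
      [[ -z $pt ]] || return 1                     # "gpt", "dos", ... -> already partitioned
      sectors=$(<"/sys/block/$block/size")         # sysfs reports size in 512-byte sectors
      bytes=$((sectors * 512))
      (( bytes >= min_disk_size ))
  }
  block_ok nvme0n1 && echo "nvme0n1 is usable"     # example device name only
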
00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:06.386 No valid GPT data, bailing 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:06.386 12:59:58 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:06.386 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:06.386 12:59:58 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:06.644 No valid GPT data, bailing 00:04:06.644 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:06.644 12:59:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:06.644 12:59:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:06.644 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:06.644 12:59:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:06.644 12:59:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:06.644 12:59:58 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:06.645 12:59:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:06.645 12:59:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:06.645 12:59:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:06.645 12:59:58 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:06.645 12:59:58 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:06.645 12:59:58 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:06.645 12:59:58 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.645 12:59:58 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.645 12:59:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.645 ************************************ 00:04:06.645 START TEST nvme_mount 00:04:06.645 ************************************ 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:06.645 12:59:58 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:08.020 Creating new GPT entries in memory. 00:04:08.020 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:08.020 other utilities. 00:04:08.020 12:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:08.020 12:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.020 12:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:08.020 12:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:08.020 12:59:59 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:08.953 Creating new GPT entries in memory. 00:04:08.953 The operation has completed successfully. 
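The partition_drive step traced above zaps any existing label on the test disk and adds a single partition spanning sectors 2048-264191 (262144 sectors, about 128 MiB), ahead of the mkfs.ext4 -qF and mount calls that follow in the log. A hedged sketch of that sequence with stock sgdisk and udevadm; udevadm settle merely stands in for the repository's sync_dev_uevents.sh helper, and the device path is a placeholder because the commands are destructive:

  #!/usr/bin/env bash
  # DANGER: wipes the target disk. $disk is a placeholder, not a device from this run.
  set -euo pipefail
  disk=/dev/nvmeXnY                                # placeholder only
  sgdisk "$disk" --zap-all                         # drop any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:264191               # partition 1: sectors 2048-264191 (~128 MiB)
  udevadm settle                                   # wait for the ${disk}p1 node to appear
  mkfs.ext4 -qF "${disk}p1"                        # quiet, force: same flags as the trace
  mount "${disk}p1" /mnt
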
00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56993 00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:08.953 13:00:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.953 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.211 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.211 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:09.211 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.211 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.211 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.211 13:00:01 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.211 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:09.470 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.470 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.728 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:09.728 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:09.728 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:09.728 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.728 13:00:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.985 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.242 13:00:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.499 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.499 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:10.499 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:10.499 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.499 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.499 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.755 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.755 00:04:10.755 real 0m4.225s 00:04:10.755 user 0m0.685s 00:04:10.755 sys 0m0.973s 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.755 13:00:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:10.755 ************************************ 00:04:10.755 END TEST nvme_mount 00:04:10.755 
************************************ 00:04:10.755 13:00:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:10.755 13:00:02 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.755 13:00:02 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.755 13:00:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.755 ************************************ 00:04:10.755 START TEST dm_mount 00:04:10.755 ************************************ 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:10.755 13:00:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:12.154 Creating new GPT entries in memory. 00:04:12.154 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:12.154 other utilities. 00:04:12.154 13:00:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:12.154 13:00:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.154 13:00:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:12.154 13:00:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:12.154 13:00:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:13.088 Creating new GPT entries in memory. 00:04:13.088 The operation has completed successfully. 
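The partition step traced here (and continued just below) can be reproduced by hand; a minimal sketch, assuming the same scratch disk /dev/nvme0n1 and that sgdisk, flock and udevadm are available — the harness uses its own sync_dev_uevents.sh helper, so udevadm settle is only a stand-in for that wait.

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                              # wipe existing GPT/MBR structures (destructive)
  flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1: sectors 2048-264191
  flock "$disk" sgdisk "$disk" --new=2:264192:526335    # partition 2: same length, directly after
  udevadm settle                                        # wait for /dev/nvme0n1p1 and p2 nodes to appear
  lsblk "$disk"                                         # expect nvme0n1p1 and nvme0n1p2

The flock around sgdisk mirrors the trace above and keeps concurrent runs from racing on the same disk.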
00:04:13.088 13:00:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:13.088 13:00:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.088 13:00:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.088 13:00:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.088 13:00:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:14.022 The operation has completed successfully. 00:04:14.022 13:00:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:14.022 13:00:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.022 13:00:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57430 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.022 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:14.023 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:14.023 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:14.023 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.023 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:14.023 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.023 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.023 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:14.281 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:14.539 13:00:06 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.539 13:00:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:14.796 13:00:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:15.055 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:04:15.055 00:04:15.055 real 0m4.235s 00:04:15.055 user 0m0.448s 00:04:15.055 sys 0m0.731s 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.055 ************************************ 00:04:15.055 END TEST dm_mount 00:04:15.055 ************************************ 00:04:15.055 13:00:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:15.055 13:00:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:15.055 13:00:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:15.055 13:00:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.055 13:00:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.055 13:00:07 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.055 13:00:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.055 13:00:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.313 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:15.313 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:15.313 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:15.313 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:15.313 13:00:07 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:15.313 13:00:07 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:15.313 13:00:07 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:15.313 13:00:07 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.313 13:00:07 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:15.313 13:00:07 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.313 13:00:07 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:15.313 ************************************ 00:04:15.313 END TEST devices 00:04:15.313 ************************************ 00:04:15.313 00:04:15.313 real 0m9.969s 00:04:15.313 user 0m1.766s 00:04:15.313 sys 0m2.290s 00:04:15.313 13:00:07 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.313 13:00:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.571 00:04:15.571 real 0m21.691s 00:04:15.571 user 0m6.840s 00:04:15.571 sys 0m8.918s 00:04:15.571 13:00:07 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.571 ************************************ 00:04:15.571 13:00:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.571 END TEST setup.sh 00:04:15.571 ************************************ 00:04:15.571 13:00:07 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:16.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.137 Hugepages 00:04:16.137 node hugesize free / total 00:04:16.137 node0 1048576kB 0 / 0 00:04:16.137 node0 2048kB 2048 / 2048 00:04:16.137 00:04:16.137 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.137 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:16.396 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:16.396 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:16.396 13:00:08 -- spdk/autotest.sh@130 -- # uname -s 00:04:16.396 13:00:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:16.396 13:00:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:16.396 13:00:08 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.221 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.221 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.221 13:00:09 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:18.157 13:00:10 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:18.157 13:00:10 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:18.157 13:00:10 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:18.157 13:00:10 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:18.157 13:00:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:18.157 13:00:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:18.157 13:00:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.157 13:00:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.157 13:00:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:18.415 13:00:10 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:18.415 13:00:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:18.415 13:00:10 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.674 Waiting for block devices as requested 00:04:18.674 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.932 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.932 13:00:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:18.932 13:00:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:18.932 13:00:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:18.932 13:00:10 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:18.932 13:00:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:18.932 13:00:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:18.932 13:00:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:18.932 13:00:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:18.932 13:00:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:18.932 13:00:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:18.932 13:00:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:18.932 13:00:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:18.932 13:00:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:18.932 13:00:10 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:18.932 13:00:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:18.932 13:00:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:18.932 13:00:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
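The controller lookup traced around this point maps a PCI address to its /dev/nvmeX node and then reads a couple of identify fields; a rough equivalent, assuming nvme-cli is installed and the sysfs layout of this VM:

  bdf=0000:00:10.0
  ctrl_sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
  ctrl=/dev/$(basename "$ctrl_sysfs")              # resolves to /dev/nvme1 for 0000:00:10.0 in this run
  nvme id-ctrl "$ctrl" | grep -E 'oacs|unvmcap'    # oacs bit 3 (0x8) set => namespace management supported

The oacs value 0x12a seen above has bit 3 set, which is why the script derives oacs_ns_manage=8 and keeps going.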
00:04:18.932 13:00:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:18.932 13:00:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:18.932 13:00:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:18.932 13:00:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:18.932 13:00:10 -- common/autotest_common.sh@1557 -- # continue 00:04:18.932 13:00:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:18.932 13:00:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:18.932 13:00:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:18.932 13:00:10 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:18.932 13:00:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:18.932 13:00:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:18.932 13:00:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:18.933 13:00:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:18.933 13:00:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:18.933 13:00:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:18.933 13:00:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:18.933 13:00:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:18.933 13:00:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:18.933 13:00:10 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:18.933 13:00:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:18.933 13:00:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:18.933 13:00:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:18.933 13:00:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:18.933 13:00:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:18.933 13:00:11 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:18.933 13:00:11 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:18.933 13:00:11 -- common/autotest_common.sh@1557 -- # continue 00:04:18.933 13:00:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:18.933 13:00:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.933 13:00:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.933 13:00:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:18.933 13:00:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.933 13:00:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.933 13:00:11 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.829 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.829 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.829 13:00:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:19.829 13:00:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:19.829 13:00:11 -- common/autotest_common.sh@10 -- # set +x 00:04:19.829 13:00:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:19.829 13:00:11 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:19.829 13:00:11 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.829 13:00:11 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:04:19.829 13:00:11 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:19.829 13:00:11 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:19.829 13:00:11 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:19.829 13:00:11 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:19.829 13:00:11 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.829 13:00:11 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:19.829 13:00:11 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:19.829 13:00:11 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:19.829 13:00:11 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:19.829 13:00:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:19.829 13:00:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:19.829 13:00:11 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:19.829 13:00:11 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.829 13:00:11 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:19.829 13:00:11 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:19.829 13:00:11 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:19.829 13:00:11 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.829 13:00:11 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:19.829 13:00:11 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:19.829 13:00:11 -- common/autotest_common.sh@1593 -- # return 0 00:04:19.829 13:00:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:19.829 13:00:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:19.829 13:00:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:19.829 13:00:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:19.829 13:00:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:19.829 13:00:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.829 13:00:11 -- common/autotest_common.sh@10 -- # set +x 00:04:19.829 13:00:12 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:19.829 13:00:12 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:19.829 13:00:12 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:19.829 13:00:12 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:19.829 13:00:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.829 13:00:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.829 13:00:12 -- common/autotest_common.sh@10 -- # set +x 00:04:19.829 ************************************ 00:04:19.829 START TEST env 00:04:19.829 ************************************ 00:04:19.829 13:00:12 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.088 * Looking for test storage... 
00:04:20.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:20.088 13:00:12 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.088 13:00:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.088 13:00:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.088 13:00:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.088 ************************************ 00:04:20.088 START TEST env_memory 00:04:20.088 ************************************ 00:04:20.088 13:00:12 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.088 00:04:20.088 00:04:20.088 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.088 http://cunit.sourceforge.net/ 00:04:20.088 00:04:20.088 00:04:20.088 Suite: memory 00:04:20.088 Test: alloc and free memory map ...[2024-07-25 13:00:12.155099] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.088 passed 00:04:20.088 Test: mem map translation ...[2024-07-25 13:00:12.186231] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.088 [2024-07-25 13:00:12.186287] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.088 [2024-07-25 13:00:12.186342] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.088 [2024-07-25 13:00:12.186354] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.088 passed 00:04:20.088 Test: mem map registration ...[2024-07-25 13:00:12.250006] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:20.088 [2024-07-25 13:00:12.250059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:20.088 passed 00:04:20.346 Test: mem map adjacent registrations ...passed 00:04:20.346 00:04:20.346 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.346 suites 1 1 n/a 0 0 00:04:20.346 tests 4 4 4 0 0 00:04:20.346 asserts 152 152 152 0 n/a 00:04:20.346 00:04:20.346 Elapsed time = 0.213 seconds 00:04:20.346 00:04:20.346 real 0m0.230s 00:04:20.346 user 0m0.215s 00:04:20.346 sys 0m0.012s 00:04:20.346 13:00:12 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.346 13:00:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.346 ************************************ 00:04:20.346 END TEST env_memory 00:04:20.346 ************************************ 00:04:20.346 13:00:12 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.346 13:00:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.346 13:00:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.346 13:00:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.346 ************************************ 00:04:20.346 START TEST env_vtophys 00:04:20.346 ************************************ 00:04:20.346 13:00:12 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.346 EAL: lib.eal log level changed from notice to debug 00:04:20.346 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 1 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 2 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 3 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 4 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 5 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 6 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 7 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 8 as core 0 on socket 0 00:04:20.347 EAL: Detected lcore 9 as core 0 on socket 0 00:04:20.347 EAL: Maximum logical cores by configuration: 128 00:04:20.347 EAL: Detected CPU lcores: 10 00:04:20.347 EAL: Detected NUMA nodes: 1 00:04:20.347 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.347 EAL: Detected shared linkage of DPDK 00:04:20.347 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.347 EAL: Selected IOVA mode 'PA' 00:04:20.347 EAL: Probing VFIO support... 00:04:20.347 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.347 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:20.347 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.347 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.347 EAL: Setting up physically contiguous memory... 00:04:20.347 EAL: Setting maximum number of open files to 524288 00:04:20.347 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.347 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.347 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.347 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.347 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.347 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.347 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.347 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.347 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.347 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.347 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.347 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.347 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.347 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.347 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.347 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.347 EAL: Hugepages will be freed exactly as allocated. 
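The EAL bring-up logged here comes from running the vtophys binary against previously reserved hugepages; a sketch of a standalone reproduction, assuming the same repo layout and that HUGEMEM=4096 matches the 2048 x 2 MB pages reported by setup.sh status earlier in this run:

  cd /home/vagrant/spdk_repo/spdk
  sudo HUGEMEM=4096 scripts/setup.sh       # reserve hugepages (value in MB) and bind the test devices
  sudo test/env/vtophys/vtophys            # emits the EAL memseg setup and heap expand/shrink trace

The expand/shrink sizes that follow (4 MB, 6 MB, 10 MB, ... 1026 MB) track 2^k + 2 MB as the malloc test doubles its allocation size each round.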
00:04:20.347 EAL: No shared files mode enabled, IPC is disabled 00:04:20.347 EAL: No shared files mode enabled, IPC is disabled 00:04:20.347 EAL: TSC frequency is ~2200000 KHz 00:04:20.347 EAL: Main lcore 0 is ready (tid=7f441141ea00;cpuset=[0]) 00:04:20.347 EAL: Trying to obtain current memory policy. 00:04:20.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.347 EAL: Restoring previous memory policy: 0 00:04:20.347 EAL: request: mp_malloc_sync 00:04:20.347 EAL: No shared files mode enabled, IPC is disabled 00:04:20.347 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.347 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.347 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.347 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.347 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:20.605 00:04:20.605 00:04:20.605 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.605 http://cunit.sourceforge.net/ 00:04:20.605 00:04:20.605 00:04:20.605 Suite: components_suite 00:04:20.605 Test: vtophys_malloc_test ...passed 00:04:20.605 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.605 EAL: Trying to obtain current memory policy. 00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.605 EAL: Trying to obtain current memory policy. 00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.605 EAL: Trying to obtain current memory policy. 
00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.605 EAL: Trying to obtain current memory policy. 00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was shrunk by 34MB 00:04:20.605 EAL: Trying to obtain current memory policy. 00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 66MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.605 EAL: Trying to obtain current memory policy. 00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.605 EAL: Trying to obtain current memory policy. 00:04:20.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.605 EAL: Restoring previous memory policy: 4 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.605 EAL: request: mp_malloc_sync 00:04:20.605 EAL: No shared files mode enabled, IPC is disabled 00:04:20.605 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.864 EAL: request: mp_malloc_sync 00:04:20.864 EAL: No shared files mode enabled, IPC is disabled 00:04:20.864 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.864 EAL: Trying to obtain current memory policy. 
00:04:20.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.864 EAL: Restoring previous memory policy: 4 00:04:20.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.864 EAL: request: mp_malloc_sync 00:04:20.864 EAL: No shared files mode enabled, IPC is disabled 00:04:20.864 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.123 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.123 EAL: request: mp_malloc_sync 00:04:21.123 EAL: No shared files mode enabled, IPC is disabled 00:04:21.123 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.123 EAL: Trying to obtain current memory policy. 00:04:21.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.381 EAL: Restoring previous memory policy: 4 00:04:21.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.381 EAL: request: mp_malloc_sync 00:04:21.381 EAL: No shared files mode enabled, IPC is disabled 00:04:21.381 EAL: Heap on socket 0 was expanded by 1026MB 00:04:21.638 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.897 passed 00:04:21.897 00:04:21.897 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.897 suites 1 1 n/a 0 0 00:04:21.897 tests 2 2 2 0 0 00:04:21.897 asserts 5218 5218 5218 0 n/a 00:04:21.897 00:04:21.897 Elapsed time = 1.265 seconds 00:04:21.897 EAL: request: mp_malloc_sync 00:04:21.897 EAL: No shared files mode enabled, IPC is disabled 00:04:21.897 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.897 EAL: request: mp_malloc_sync 00:04:21.897 EAL: No shared files mode enabled, IPC is disabled 00:04:21.897 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.897 EAL: No shared files mode enabled, IPC is disabled 00:04:21.897 EAL: No shared files mode enabled, IPC is disabled 00:04:21.897 EAL: No shared files mode enabled, IPC is disabled 00:04:21.897 00:04:21.897 real 0m1.471s 00:04:21.897 user 0m0.802s 00:04:21.897 sys 0m0.527s 00:04:21.897 13:00:13 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.897 13:00:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.897 ************************************ 00:04:21.897 END TEST env_vtophys 00:04:21.898 ************************************ 00:04:21.898 13:00:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.898 13:00:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.898 13:00:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.898 13:00:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.898 ************************************ 00:04:21.898 START TEST env_pci 00:04:21.898 ************************************ 00:04:21.898 13:00:13 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.898 00:04:21.898 00:04:21.898 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.898 http://cunit.sourceforge.net/ 00:04:21.898 00:04:21.898 00:04:21.898 Suite: pci 00:04:21.898 Test: pci_hook ...[2024-07-25 13:00:13.927620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58624 has claimed it 00:04:21.898 passed 00:04:21.898 00:04:21.898 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.898 suites 1 1 n/a 0 0 00:04:21.898 tests 1 1 1 0 0 00:04:21.898 asserts 25 25 25 0 n/a 00:04:21.898 00:04:21.898 Elapsed time = 0.002 seconds 00:04:21.898 EAL: Cannot find 
device (10000:00:01.0) 00:04:21.898 EAL: Failed to attach device on primary process 00:04:21.898 00:04:21.898 real 0m0.020s 00:04:21.898 user 0m0.010s 00:04:21.898 sys 0m0.010s 00:04:21.898 13:00:13 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.898 13:00:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.898 ************************************ 00:04:21.898 END TEST env_pci 00:04:21.898 ************************************ 00:04:21.898 13:00:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.898 13:00:13 env -- env/env.sh@15 -- # uname 00:04:21.898 13:00:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.898 13:00:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.898 13:00:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.898 13:00:13 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:21.898 13:00:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.898 13:00:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.898 ************************************ 00:04:21.898 START TEST env_dpdk_post_init 00:04:21.898 ************************************ 00:04:21.898 13:00:13 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.898 EAL: Detected CPU lcores: 10 00:04:21.898 EAL: Detected NUMA nodes: 1 00:04:21.898 EAL: Detected shared linkage of DPDK 00:04:21.898 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.898 EAL: Selected IOVA mode 'PA' 00:04:22.156 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.156 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:22.156 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:22.156 Starting DPDK initialization... 00:04:22.156 Starting SPDK post initialization... 00:04:22.156 SPDK NVMe probe 00:04:22.156 Attaching to 0000:00:10.0 00:04:22.156 Attaching to 0000:00:11.0 00:04:22.156 Attached to 0000:00:10.0 00:04:22.156 Attached to 0000:00:11.0 00:04:22.156 Cleaning up... 
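The probe above only attaches to 0000:00:10.0 and 0000:00:11.0 because both controllers were rebound from the kernel nvme driver to uio_pci_generic earlier in the log; a quick binding check, assuming the usual sysfs layout:

  for bdf in 0000:00:10.0 0000:00:11.0; do
      printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
  done                                     # expect uio_pci_generic (or vfio-pci) before the DPDK probe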
00:04:22.156 00:04:22.156 real 0m0.180s 00:04:22.156 user 0m0.044s 00:04:22.156 sys 0m0.036s 00:04:22.156 13:00:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.156 13:00:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.156 ************************************ 00:04:22.156 END TEST env_dpdk_post_init 00:04:22.156 ************************************ 00:04:22.156 13:00:14 env -- env/env.sh@26 -- # uname 00:04:22.156 13:00:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:22.156 13:00:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.156 13:00:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.156 13:00:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.156 13:00:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.156 ************************************ 00:04:22.156 START TEST env_mem_callbacks 00:04:22.156 ************************************ 00:04:22.156 13:00:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.156 EAL: Detected CPU lcores: 10 00:04:22.156 EAL: Detected NUMA nodes: 1 00:04:22.156 EAL: Detected shared linkage of DPDK 00:04:22.156 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.156 EAL: Selected IOVA mode 'PA' 00:04:22.413 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.413 00:04:22.413 00:04:22.413 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.413 http://cunit.sourceforge.net/ 00:04:22.413 00:04:22.413 00:04:22.413 Suite: memory 00:04:22.413 Test: test ... 00:04:22.413 register 0x200000200000 2097152 00:04:22.413 malloc 3145728 00:04:22.413 register 0x200000400000 4194304 00:04:22.413 buf 0x200000500000 len 3145728 PASSED 00:04:22.413 malloc 64 00:04:22.413 buf 0x2000004fff40 len 64 PASSED 00:04:22.413 malloc 4194304 00:04:22.413 register 0x200000800000 6291456 00:04:22.413 buf 0x200000a00000 len 4194304 PASSED 00:04:22.413 free 0x200000500000 3145728 00:04:22.413 free 0x2000004fff40 64 00:04:22.413 unregister 0x200000400000 4194304 PASSED 00:04:22.413 free 0x200000a00000 4194304 00:04:22.413 unregister 0x200000800000 6291456 PASSED 00:04:22.413 malloc 8388608 00:04:22.413 register 0x200000400000 10485760 00:04:22.413 buf 0x200000600000 len 8388608 PASSED 00:04:22.413 free 0x200000600000 8388608 00:04:22.413 unregister 0x200000400000 10485760 PASSED 00:04:22.413 passed 00:04:22.413 00:04:22.413 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.413 suites 1 1 n/a 0 0 00:04:22.413 tests 1 1 1 0 0 00:04:22.413 asserts 15 15 15 0 n/a 00:04:22.413 00:04:22.413 Elapsed time = 0.009 seconds 00:04:22.413 00:04:22.413 real 0m0.146s 00:04:22.413 user 0m0.022s 00:04:22.413 sys 0m0.022s 00:04:22.413 13:00:14 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.413 13:00:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:22.413 ************************************ 00:04:22.413 END TEST env_mem_callbacks 00:04:22.413 ************************************ 00:04:22.413 00:04:22.413 real 0m2.388s 00:04:22.413 user 0m1.200s 00:04:22.413 sys 0m0.827s 00:04:22.413 13:00:14 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.413 ************************************ 00:04:22.413 END TEST env 00:04:22.413 ************************************ 00:04:22.413 13:00:14 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:22.413 13:00:14 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:22.413 13:00:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.413 13:00:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.413 13:00:14 -- common/autotest_common.sh@10 -- # set +x 00:04:22.413 ************************************ 00:04:22.413 START TEST rpc 00:04:22.413 ************************************ 00:04:22.413 13:00:14 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:22.413 * Looking for test storage... 00:04:22.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.413 13:00:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58739 00:04:22.414 13:00:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:22.414 13:00:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.414 13:00:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58739 00:04:22.414 13:00:14 rpc -- common/autotest_common.sh@831 -- # '[' -z 58739 ']' 00:04:22.414 13:00:14 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.414 13:00:14 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.414 13:00:14 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.414 13:00:14 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.414 13:00:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.414 [2024-07-25 13:00:14.591664] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:22.414 [2024-07-25 13:00:14.591788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58739 ] 00:04:22.671 [2024-07-25 13:00:14.730933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.671 [2024-07-25 13:00:14.833766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.671 [2024-07-25 13:00:14.833858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58739' to capture a snapshot of events at runtime. 00:04:22.671 [2024-07-25 13:00:14.833888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.671 [2024-07-25 13:00:14.833897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.671 [2024-07-25 13:00:14.833904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58739 for offline analysis/debug. 
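The three app_setup_trace notices above appear because spdk_tgt was launched with '-e bdev', so the bdev tracepoint group is enabled for pid 58739. A short sketch of following the hint, assuming the spdk_trace tool was built under build/bin in the same repo:

  # Take a live snapshot of the enabled tracepoints, exactly as the notice suggests.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 58739

  # Or keep the shared-memory trace file for offline analysis after the target exits.
  cp /dev/shm/spdk_tgt_trace.pid58739 /tmp/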
00:04:22.671 [2024-07-25 13:00:14.833937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.929 [2024-07-25 13:00:14.891228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.497 13:00:15 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.497 13:00:15 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:23.497 13:00:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.497 13:00:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.497 13:00:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:23.497 13:00:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:23.497 13:00:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.497 13:00:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.497 13:00:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.497 ************************************ 00:04:23.497 START TEST rpc_integrity 00:04:23.497 ************************************ 00:04:23.497 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:23.497 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.497 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.497 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.497 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.497 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.497 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.756 { 00:04:23.756 "name": "Malloc0", 00:04:23.756 "aliases": [ 00:04:23.756 "602c6eba-06d3-4348-b947-01eb6c75c53b" 00:04:23.756 ], 00:04:23.756 "product_name": "Malloc disk", 00:04:23.756 "block_size": 512, 00:04:23.756 "num_blocks": 16384, 00:04:23.756 "uuid": "602c6eba-06d3-4348-b947-01eb6c75c53b", 00:04:23.756 "assigned_rate_limits": { 00:04:23.756 "rw_ios_per_sec": 0, 00:04:23.756 "rw_mbytes_per_sec": 0, 00:04:23.756 "r_mbytes_per_sec": 0, 00:04:23.756 "w_mbytes_per_sec": 0 00:04:23.756 }, 00:04:23.756 "claimed": false, 00:04:23.756 "zoned": false, 00:04:23.756 
"supported_io_types": { 00:04:23.756 "read": true, 00:04:23.756 "write": true, 00:04:23.756 "unmap": true, 00:04:23.756 "flush": true, 00:04:23.756 "reset": true, 00:04:23.756 "nvme_admin": false, 00:04:23.756 "nvme_io": false, 00:04:23.756 "nvme_io_md": false, 00:04:23.756 "write_zeroes": true, 00:04:23.756 "zcopy": true, 00:04:23.756 "get_zone_info": false, 00:04:23.756 "zone_management": false, 00:04:23.756 "zone_append": false, 00:04:23.756 "compare": false, 00:04:23.756 "compare_and_write": false, 00:04:23.756 "abort": true, 00:04:23.756 "seek_hole": false, 00:04:23.756 "seek_data": false, 00:04:23.756 "copy": true, 00:04:23.756 "nvme_iov_md": false 00:04:23.756 }, 00:04:23.756 "memory_domains": [ 00:04:23.756 { 00:04:23.756 "dma_device_id": "system", 00:04:23.756 "dma_device_type": 1 00:04:23.756 }, 00:04:23.756 { 00:04:23.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.756 "dma_device_type": 2 00:04:23.756 } 00:04:23.756 ], 00:04:23.756 "driver_specific": {} 00:04:23.756 } 00:04:23.756 ]' 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.756 [2024-07-25 13:00:15.789755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.756 [2024-07-25 13:00:15.789845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.756 [2024-07-25 13:00:15.789891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfcfda0 00:04:23.756 [2024-07-25 13:00:15.789911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.756 [2024-07-25 13:00:15.791589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.756 [2024-07-25 13:00:15.791639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.756 Passthru0 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.756 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.756 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.756 { 00:04:23.756 "name": "Malloc0", 00:04:23.756 "aliases": [ 00:04:23.756 "602c6eba-06d3-4348-b947-01eb6c75c53b" 00:04:23.756 ], 00:04:23.756 "product_name": "Malloc disk", 00:04:23.756 "block_size": 512, 00:04:23.756 "num_blocks": 16384, 00:04:23.756 "uuid": "602c6eba-06d3-4348-b947-01eb6c75c53b", 00:04:23.756 "assigned_rate_limits": { 00:04:23.756 "rw_ios_per_sec": 0, 00:04:23.756 "rw_mbytes_per_sec": 0, 00:04:23.756 "r_mbytes_per_sec": 0, 00:04:23.756 "w_mbytes_per_sec": 0 00:04:23.756 }, 00:04:23.756 "claimed": true, 00:04:23.756 "claim_type": "exclusive_write", 00:04:23.756 "zoned": false, 00:04:23.756 "supported_io_types": { 00:04:23.756 "read": true, 00:04:23.756 "write": true, 00:04:23.756 "unmap": true, 00:04:23.756 "flush": true, 00:04:23.756 "reset": true, 00:04:23.756 "nvme_admin": false, 
00:04:23.756 "nvme_io": false, 00:04:23.756 "nvme_io_md": false, 00:04:23.756 "write_zeroes": true, 00:04:23.756 "zcopy": true, 00:04:23.756 "get_zone_info": false, 00:04:23.756 "zone_management": false, 00:04:23.756 "zone_append": false, 00:04:23.756 "compare": false, 00:04:23.756 "compare_and_write": false, 00:04:23.756 "abort": true, 00:04:23.756 "seek_hole": false, 00:04:23.756 "seek_data": false, 00:04:23.756 "copy": true, 00:04:23.756 "nvme_iov_md": false 00:04:23.756 }, 00:04:23.756 "memory_domains": [ 00:04:23.756 { 00:04:23.756 "dma_device_id": "system", 00:04:23.756 "dma_device_type": 1 00:04:23.756 }, 00:04:23.756 { 00:04:23.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.756 "dma_device_type": 2 00:04:23.756 } 00:04:23.756 ], 00:04:23.756 "driver_specific": {} 00:04:23.756 }, 00:04:23.756 { 00:04:23.757 "name": "Passthru0", 00:04:23.757 "aliases": [ 00:04:23.757 "795986cd-368c-5fa7-a28b-1b6c06df88b0" 00:04:23.757 ], 00:04:23.757 "product_name": "passthru", 00:04:23.757 "block_size": 512, 00:04:23.757 "num_blocks": 16384, 00:04:23.757 "uuid": "795986cd-368c-5fa7-a28b-1b6c06df88b0", 00:04:23.757 "assigned_rate_limits": { 00:04:23.757 "rw_ios_per_sec": 0, 00:04:23.757 "rw_mbytes_per_sec": 0, 00:04:23.757 "r_mbytes_per_sec": 0, 00:04:23.757 "w_mbytes_per_sec": 0 00:04:23.757 }, 00:04:23.757 "claimed": false, 00:04:23.757 "zoned": false, 00:04:23.757 "supported_io_types": { 00:04:23.757 "read": true, 00:04:23.757 "write": true, 00:04:23.757 "unmap": true, 00:04:23.757 "flush": true, 00:04:23.757 "reset": true, 00:04:23.757 "nvme_admin": false, 00:04:23.757 "nvme_io": false, 00:04:23.757 "nvme_io_md": false, 00:04:23.757 "write_zeroes": true, 00:04:23.757 "zcopy": true, 00:04:23.757 "get_zone_info": false, 00:04:23.757 "zone_management": false, 00:04:23.757 "zone_append": false, 00:04:23.757 "compare": false, 00:04:23.757 "compare_and_write": false, 00:04:23.757 "abort": true, 00:04:23.757 "seek_hole": false, 00:04:23.757 "seek_data": false, 00:04:23.757 "copy": true, 00:04:23.757 "nvme_iov_md": false 00:04:23.757 }, 00:04:23.757 "memory_domains": [ 00:04:23.757 { 00:04:23.757 "dma_device_id": "system", 00:04:23.757 "dma_device_type": 1 00:04:23.757 }, 00:04:23.757 { 00:04:23.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.757 "dma_device_type": 2 00:04:23.757 } 00:04:23.757 ], 00:04:23.757 "driver_specific": { 00:04:23.757 "passthru": { 00:04:23.757 "name": "Passthru0", 00:04:23.757 "base_bdev_name": "Malloc0" 00:04:23.757 } 00:04:23.757 } 00:04:23.757 } 00:04:23.757 ]' 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.757 13:00:15 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.757 13:00:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.757 00:04:23.757 real 0m0.303s 00:04:23.757 user 0m0.198s 00:04:23.757 sys 0m0.037s 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.757 13:00:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.757 ************************************ 00:04:23.757 END TEST rpc_integrity 00:04:23.757 ************************************ 00:04:24.016 13:00:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:24.016 13:00:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.016 13:00:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.016 13:00:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.016 ************************************ 00:04:24.016 START TEST rpc_plugins 00:04:24.016 ************************************ 00:04:24.016 13:00:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:24.016 13:00:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:24.016 13:00:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.016 13:00:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.016 13:00:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.016 13:00:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:24.016 13:00:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:24.016 13:00:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.016 13:00:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:24.016 { 00:04:24.016 "name": "Malloc1", 00:04:24.016 "aliases": [ 00:04:24.016 "cca36cba-35f2-48d5-bbfb-c40677a99b8a" 00:04:24.016 ], 00:04:24.016 "product_name": "Malloc disk", 00:04:24.016 "block_size": 4096, 00:04:24.016 "num_blocks": 256, 00:04:24.016 "uuid": "cca36cba-35f2-48d5-bbfb-c40677a99b8a", 00:04:24.016 "assigned_rate_limits": { 00:04:24.016 "rw_ios_per_sec": 0, 00:04:24.016 "rw_mbytes_per_sec": 0, 00:04:24.016 "r_mbytes_per_sec": 0, 00:04:24.016 "w_mbytes_per_sec": 0 00:04:24.016 }, 00:04:24.016 "claimed": false, 00:04:24.016 "zoned": false, 00:04:24.016 "supported_io_types": { 00:04:24.016 "read": true, 00:04:24.016 "write": true, 00:04:24.016 "unmap": true, 00:04:24.016 "flush": true, 00:04:24.016 "reset": true, 00:04:24.016 "nvme_admin": false, 00:04:24.016 "nvme_io": false, 00:04:24.016 "nvme_io_md": false, 00:04:24.016 "write_zeroes": true, 00:04:24.016 "zcopy": true, 00:04:24.016 "get_zone_info": false, 00:04:24.016 "zone_management": false, 00:04:24.016 "zone_append": false, 00:04:24.016 "compare": false, 00:04:24.016 "compare_and_write": false, 00:04:24.016 "abort": true, 00:04:24.016 "seek_hole": false, 00:04:24.016 "seek_data": false, 00:04:24.016 "copy": true, 00:04:24.016 "nvme_iov_md": false 00:04:24.016 }, 00:04:24.016 "memory_domains": [ 00:04:24.016 { 
00:04:24.016 "dma_device_id": "system", 00:04:24.016 "dma_device_type": 1 00:04:24.016 }, 00:04:24.016 { 00:04:24.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.016 "dma_device_type": 2 00:04:24.016 } 00:04:24.016 ], 00:04:24.016 "driver_specific": {} 00:04:24.016 } 00:04:24.016 ]' 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:24.016 13:00:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:24.016 00:04:24.016 real 0m0.149s 00:04:24.016 user 0m0.099s 00:04:24.016 sys 0m0.019s 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.016 ************************************ 00:04:24.016 END TEST rpc_plugins 00:04:24.016 13:00:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.016 ************************************ 00:04:24.016 13:00:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:24.016 13:00:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.016 13:00:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.016 13:00:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.016 ************************************ 00:04:24.016 START TEST rpc_trace_cmd_test 00:04:24.016 ************************************ 00:04:24.016 13:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:24.016 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:24.016 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:24.016 13:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.016 13:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.275 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58739", 00:04:24.275 "tpoint_group_mask": "0x8", 00:04:24.275 "iscsi_conn": { 00:04:24.275 "mask": "0x2", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "scsi": { 00:04:24.275 "mask": "0x4", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "bdev": { 00:04:24.275 "mask": "0x8", 00:04:24.275 "tpoint_mask": "0xffffffffffffffff" 00:04:24.275 }, 00:04:24.275 "nvmf_rdma": { 00:04:24.275 "mask": "0x10", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "nvmf_tcp": { 00:04:24.275 "mask": "0x20", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "ftl": { 00:04:24.275 
"mask": "0x40", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "blobfs": { 00:04:24.275 "mask": "0x80", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "dsa": { 00:04:24.275 "mask": "0x200", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "thread": { 00:04:24.275 "mask": "0x400", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "nvme_pcie": { 00:04:24.275 "mask": "0x800", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "iaa": { 00:04:24.275 "mask": "0x1000", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "nvme_tcp": { 00:04:24.275 "mask": "0x2000", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "bdev_nvme": { 00:04:24.275 "mask": "0x4000", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 }, 00:04:24.275 "sock": { 00:04:24.275 "mask": "0x8000", 00:04:24.275 "tpoint_mask": "0x0" 00:04:24.275 } 00:04:24.275 }' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.275 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.534 13:00:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.534 00:04:24.534 real 0m0.278s 00:04:24.534 user 0m0.240s 00:04:24.534 sys 0m0.027s 00:04:24.534 13:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.534 13:00:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.534 ************************************ 00:04:24.534 END TEST rpc_trace_cmd_test 00:04:24.534 ************************************ 00:04:24.534 13:00:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.534 13:00:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.534 13:00:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.534 13:00:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.534 13:00:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.534 13:00:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.534 ************************************ 00:04:24.534 START TEST rpc_daemon_integrity 00:04:24.534 ************************************ 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.534 { 00:04:24.534 "name": "Malloc2", 00:04:24.534 "aliases": [ 00:04:24.534 "be1066fc-ca4f-4913-97e3-f86594d2aa06" 00:04:24.534 ], 00:04:24.534 "product_name": "Malloc disk", 00:04:24.534 "block_size": 512, 00:04:24.534 "num_blocks": 16384, 00:04:24.534 "uuid": "be1066fc-ca4f-4913-97e3-f86594d2aa06", 00:04:24.534 "assigned_rate_limits": { 00:04:24.534 "rw_ios_per_sec": 0, 00:04:24.534 "rw_mbytes_per_sec": 0, 00:04:24.534 "r_mbytes_per_sec": 0, 00:04:24.534 "w_mbytes_per_sec": 0 00:04:24.534 }, 00:04:24.534 "claimed": false, 00:04:24.534 "zoned": false, 00:04:24.534 "supported_io_types": { 00:04:24.534 "read": true, 00:04:24.534 "write": true, 00:04:24.534 "unmap": true, 00:04:24.534 "flush": true, 00:04:24.534 "reset": true, 00:04:24.534 "nvme_admin": false, 00:04:24.534 "nvme_io": false, 00:04:24.534 "nvme_io_md": false, 00:04:24.534 "write_zeroes": true, 00:04:24.534 "zcopy": true, 00:04:24.534 "get_zone_info": false, 00:04:24.534 "zone_management": false, 00:04:24.534 "zone_append": false, 00:04:24.534 "compare": false, 00:04:24.534 "compare_and_write": false, 00:04:24.534 "abort": true, 00:04:24.534 "seek_hole": false, 00:04:24.534 "seek_data": false, 00:04:24.534 "copy": true, 00:04:24.534 "nvme_iov_md": false 00:04:24.534 }, 00:04:24.534 "memory_domains": [ 00:04:24.534 { 00:04:24.534 "dma_device_id": "system", 00:04:24.534 "dma_device_type": 1 00:04:24.534 }, 00:04:24.534 { 00:04:24.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.534 "dma_device_type": 2 00:04:24.534 } 00:04:24.534 ], 00:04:24.534 "driver_specific": {} 00:04:24.534 } 00:04:24.534 ]' 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.534 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.534 [2024-07-25 13:00:16.674571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.534 [2024-07-25 13:00:16.674673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.534 [2024-07-25 13:00:16.674705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1034be0 00:04:24.534 [2024-07-25 13:00:16.674715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.534 [2024-07-25 13:00:16.676420] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.534 [2024-07-25 13:00:16.676470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.534 Passthru0 00:04:24.535 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.535 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.535 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.535 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.535 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.535 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.535 { 00:04:24.535 "name": "Malloc2", 00:04:24.535 "aliases": [ 00:04:24.535 "be1066fc-ca4f-4913-97e3-f86594d2aa06" 00:04:24.535 ], 00:04:24.535 "product_name": "Malloc disk", 00:04:24.535 "block_size": 512, 00:04:24.535 "num_blocks": 16384, 00:04:24.535 "uuid": "be1066fc-ca4f-4913-97e3-f86594d2aa06", 00:04:24.535 "assigned_rate_limits": { 00:04:24.535 "rw_ios_per_sec": 0, 00:04:24.535 "rw_mbytes_per_sec": 0, 00:04:24.535 "r_mbytes_per_sec": 0, 00:04:24.535 "w_mbytes_per_sec": 0 00:04:24.535 }, 00:04:24.535 "claimed": true, 00:04:24.535 "claim_type": "exclusive_write", 00:04:24.535 "zoned": false, 00:04:24.535 "supported_io_types": { 00:04:24.535 "read": true, 00:04:24.535 "write": true, 00:04:24.535 "unmap": true, 00:04:24.535 "flush": true, 00:04:24.535 "reset": true, 00:04:24.535 "nvme_admin": false, 00:04:24.535 "nvme_io": false, 00:04:24.535 "nvme_io_md": false, 00:04:24.535 "write_zeroes": true, 00:04:24.535 "zcopy": true, 00:04:24.535 "get_zone_info": false, 00:04:24.535 "zone_management": false, 00:04:24.535 "zone_append": false, 00:04:24.535 "compare": false, 00:04:24.535 "compare_and_write": false, 00:04:24.535 "abort": true, 00:04:24.535 "seek_hole": false, 00:04:24.535 "seek_data": false, 00:04:24.535 "copy": true, 00:04:24.535 "nvme_iov_md": false 00:04:24.535 }, 00:04:24.535 "memory_domains": [ 00:04:24.535 { 00:04:24.535 "dma_device_id": "system", 00:04:24.535 "dma_device_type": 1 00:04:24.535 }, 00:04:24.535 { 00:04:24.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.535 "dma_device_type": 2 00:04:24.535 } 00:04:24.535 ], 00:04:24.535 "driver_specific": {} 00:04:24.535 }, 00:04:24.535 { 00:04:24.535 "name": "Passthru0", 00:04:24.535 "aliases": [ 00:04:24.535 "7e85c658-d364-532d-9b0e-d6e6f9742b2a" 00:04:24.535 ], 00:04:24.535 "product_name": "passthru", 00:04:24.535 "block_size": 512, 00:04:24.535 "num_blocks": 16384, 00:04:24.535 "uuid": "7e85c658-d364-532d-9b0e-d6e6f9742b2a", 00:04:24.535 "assigned_rate_limits": { 00:04:24.535 "rw_ios_per_sec": 0, 00:04:24.535 "rw_mbytes_per_sec": 0, 00:04:24.535 "r_mbytes_per_sec": 0, 00:04:24.535 "w_mbytes_per_sec": 0 00:04:24.535 }, 00:04:24.535 "claimed": false, 00:04:24.535 "zoned": false, 00:04:24.535 "supported_io_types": { 00:04:24.535 "read": true, 00:04:24.535 "write": true, 00:04:24.535 "unmap": true, 00:04:24.535 "flush": true, 00:04:24.535 "reset": true, 00:04:24.535 "nvme_admin": false, 00:04:24.535 "nvme_io": false, 00:04:24.535 "nvme_io_md": false, 00:04:24.535 "write_zeroes": true, 00:04:24.535 "zcopy": true, 00:04:24.535 "get_zone_info": false, 00:04:24.535 "zone_management": false, 00:04:24.535 "zone_append": false, 00:04:24.535 "compare": false, 00:04:24.535 "compare_and_write": false, 00:04:24.535 "abort": true, 00:04:24.535 "seek_hole": false, 
00:04:24.535 "seek_data": false, 00:04:24.535 "copy": true, 00:04:24.535 "nvme_iov_md": false 00:04:24.535 }, 00:04:24.535 "memory_domains": [ 00:04:24.535 { 00:04:24.535 "dma_device_id": "system", 00:04:24.535 "dma_device_type": 1 00:04:24.535 }, 00:04:24.535 { 00:04:24.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.535 "dma_device_type": 2 00:04:24.535 } 00:04:24.535 ], 00:04:24.535 "driver_specific": { 00:04:24.535 "passthru": { 00:04:24.535 "name": "Passthru0", 00:04:24.535 "base_bdev_name": "Malloc2" 00:04:24.535 } 00:04:24.535 } 00:04:24.535 } 00:04:24.535 ]' 00:04:24.535 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.793 00:04:24.793 real 0m0.313s 00:04:24.793 user 0m0.201s 00:04:24.793 sys 0m0.042s 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.793 ************************************ 00:04:24.793 13:00:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.793 END TEST rpc_daemon_integrity 00:04:24.793 ************************************ 00:04:24.793 13:00:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.793 13:00:16 rpc -- rpc/rpc.sh@84 -- # killprocess 58739 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@950 -- # '[' -z 58739 ']' 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@954 -- # kill -0 58739 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@955 -- # uname 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58739 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.793 killing process with pid 58739 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58739' 00:04:24.793 13:00:16 rpc -- common/autotest_common.sh@969 -- # kill 58739 00:04:24.793 13:00:16 
rpc -- common/autotest_common.sh@974 -- # wait 58739 00:04:25.364 00:04:25.364 real 0m2.846s 00:04:25.364 user 0m3.707s 00:04:25.364 sys 0m0.684s 00:04:25.364 13:00:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.364 13:00:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.364 ************************************ 00:04:25.364 END TEST rpc 00:04:25.364 ************************************ 00:04:25.364 13:00:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.364 13:00:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.364 13:00:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.364 13:00:17 -- common/autotest_common.sh@10 -- # set +x 00:04:25.364 ************************************ 00:04:25.364 START TEST skip_rpc 00:04:25.364 ************************************ 00:04:25.364 13:00:17 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.364 * Looking for test storage... 00:04:25.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.364 13:00:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.364 13:00:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.365 13:00:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.365 13:00:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.365 13:00:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.365 13:00:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.365 ************************************ 00:04:25.365 START TEST skip_rpc 00:04:25.365 ************************************ 00:04:25.365 13:00:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:25.365 13:00:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58931 00:04:25.365 13:00:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.365 13:00:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.365 13:00:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.365 [2024-07-25 13:00:17.498095] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
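The skip_rpc case starting here launches spdk_tgt with --no-rpc-server, sleeps five seconds, and then requires the spdk_get_version RPC to fail. A condensed sketch of that flow outside the harness, assuming the same build and script paths as this run:

  # Start a target that never opens an RPC socket, then expect the RPC call to fail.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5
  if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
      echo "spdk_get_version failed as expected: no RPC server is listening"
  fi
  kill "$tgt_pid"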
00:04:25.365 [2024-07-25 13:00:17.498754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58931 ] 00:04:25.634 [2024-07-25 13:00:17.637996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.634 [2024-07-25 13:00:17.717036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.635 [2024-07-25 13:00:17.771188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58931 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58931 ']' 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58931 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58931 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:30.914 killing process with pid 58931 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58931' 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58931 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58931 00:04:30.914 00:04:30.914 real 0m5.414s 00:04:30.914 user 0m5.040s 00:04:30.914 sys 0m0.278s 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.914 13:00:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:04:30.914 ************************************ 00:04:30.914 END TEST skip_rpc 00:04:30.914 ************************************ 00:04:30.914 13:00:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.914 13:00:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.914 13:00:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.914 13:00:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.914 ************************************ 00:04:30.914 START TEST skip_rpc_with_json 00:04:30.914 ************************************ 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59018 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59018 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 59018 ']' 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.914 13:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.914 [2024-07-25 13:00:22.961373] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
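waitforlisten, traced above for pid 59018, polls until the new target is up and its RPC socket at /var/tmp/spdk.sock accepts commands (max_retries=100 in this run). A simplified stand-in for that wait, assuming only the behavior visible in the trace rather than the helper's exact implementation:

  # Poll for the UNIX-domain RPC socket before sending any RPCs to the new target.
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done
  [ -S /var/tmp/spdk.sock ] || { echo "target never started listening" >&2; exit 1; }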
00:04:30.914 [2024-07-25 13:00:22.961511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59018 ] 00:04:30.914 [2024-07-25 13:00:23.097929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.172 [2024-07-25 13:00:23.184209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.172 [2024-07-25 13:00:23.239964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:31.737 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.737 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:31.737 request: 00:04:31.737 { 00:04:31.737 "trtype": "tcp", 00:04:31.737 "method": "nvmf_get_transports", 00:04:31.737 "req_id": 1 00:04:31.737 } 00:04:31.737 Got JSON-RPC error response 00:04:31.738 response: 00:04:31.738 { 00:04:31.738 "code": -19, 00:04:31.738 "message": "No such device" 00:04:31.738 } 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.738 [2024-07-25 13:00:23.845773] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.738 [2024-07-25 13:00:23.857910] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.738 13:00:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.996 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.996 13:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.996 { 00:04:31.996 "subsystems": [ 00:04:31.996 { 00:04:31.996 "subsystem": "keyring", 00:04:31.996 "config": [] 00:04:31.996 }, 00:04:31.996 { 00:04:31.996 "subsystem": "iobuf", 00:04:31.996 "config": [ 00:04:31.996 { 00:04:31.996 "method": "iobuf_set_options", 00:04:31.996 "params": { 00:04:31.996 "small_pool_count": 8192, 00:04:31.996 "large_pool_count": 1024, 00:04:31.996 "small_bufsize": 8192, 00:04:31.996 "large_bufsize": 135168 00:04:31.996 } 00:04:31.996 } 00:04:31.996 ] 00:04:31.996 }, 00:04:31.996 { 00:04:31.996 "subsystem": "sock", 00:04:31.996 "config": [ 00:04:31.996 { 00:04:31.996 "method": "sock_set_default_impl", 00:04:31.996 "params": { 00:04:31.996 "impl_name": "uring" 00:04:31.996 } 00:04:31.996 }, 00:04:31.996 { 00:04:31.996 "method": "sock_impl_set_options", 
00:04:31.996 "params": { 00:04:31.996 "impl_name": "ssl", 00:04:31.996 "recv_buf_size": 4096, 00:04:31.996 "send_buf_size": 4096, 00:04:31.996 "enable_recv_pipe": true, 00:04:31.996 "enable_quickack": false, 00:04:31.996 "enable_placement_id": 0, 00:04:31.996 "enable_zerocopy_send_server": true, 00:04:31.996 "enable_zerocopy_send_client": false, 00:04:31.996 "zerocopy_threshold": 0, 00:04:31.996 "tls_version": 0, 00:04:31.996 "enable_ktls": false 00:04:31.996 } 00:04:31.996 }, 00:04:31.996 { 00:04:31.996 "method": "sock_impl_set_options", 00:04:31.996 "params": { 00:04:31.996 "impl_name": "posix", 00:04:31.996 "recv_buf_size": 2097152, 00:04:31.996 "send_buf_size": 2097152, 00:04:31.996 "enable_recv_pipe": true, 00:04:31.996 "enable_quickack": false, 00:04:31.996 "enable_placement_id": 0, 00:04:31.996 "enable_zerocopy_send_server": true, 00:04:31.996 "enable_zerocopy_send_client": false, 00:04:31.996 "zerocopy_threshold": 0, 00:04:31.996 "tls_version": 0, 00:04:31.996 "enable_ktls": false 00:04:31.996 } 00:04:31.996 }, 00:04:31.996 { 00:04:31.996 "method": "sock_impl_set_options", 00:04:31.996 "params": { 00:04:31.996 "impl_name": "uring", 00:04:31.996 "recv_buf_size": 2097152, 00:04:31.996 "send_buf_size": 2097152, 00:04:31.996 "enable_recv_pipe": true, 00:04:31.996 "enable_quickack": false, 00:04:31.996 "enable_placement_id": 0, 00:04:31.996 "enable_zerocopy_send_server": false, 00:04:31.996 "enable_zerocopy_send_client": false, 00:04:31.996 "zerocopy_threshold": 0, 00:04:31.996 "tls_version": 0, 00:04:31.996 "enable_ktls": false 00:04:31.996 } 00:04:31.996 } 00:04:31.996 ] 00:04:31.996 }, 00:04:31.996 { 00:04:31.996 "subsystem": "vmd", 00:04:31.996 "config": [] 00:04:31.996 }, 00:04:31.996 { 00:04:31.996 "subsystem": "accel", 00:04:31.996 "config": [ 00:04:31.996 { 00:04:31.996 "method": "accel_set_options", 00:04:31.996 "params": { 00:04:31.996 "small_cache_size": 128, 00:04:31.996 "large_cache_size": 16, 00:04:31.996 "task_count": 2048, 00:04:31.996 "sequence_count": 2048, 00:04:31.997 "buf_count": 2048 00:04:31.997 } 00:04:31.997 } 00:04:31.997 ] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "bdev", 00:04:31.997 "config": [ 00:04:31.997 { 00:04:31.997 "method": "bdev_set_options", 00:04:31.997 "params": { 00:04:31.997 "bdev_io_pool_size": 65535, 00:04:31.997 "bdev_io_cache_size": 256, 00:04:31.997 "bdev_auto_examine": true, 00:04:31.997 "iobuf_small_cache_size": 128, 00:04:31.997 "iobuf_large_cache_size": 16 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "bdev_raid_set_options", 00:04:31.997 "params": { 00:04:31.997 "process_window_size_kb": 1024, 00:04:31.997 "process_max_bandwidth_mb_sec": 0 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "bdev_iscsi_set_options", 00:04:31.997 "params": { 00:04:31.997 "timeout_sec": 30 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "bdev_nvme_set_options", 00:04:31.997 "params": { 00:04:31.997 "action_on_timeout": "none", 00:04:31.997 "timeout_us": 0, 00:04:31.997 "timeout_admin_us": 0, 00:04:31.997 "keep_alive_timeout_ms": 10000, 00:04:31.997 "arbitration_burst": 0, 00:04:31.997 "low_priority_weight": 0, 00:04:31.997 "medium_priority_weight": 0, 00:04:31.997 "high_priority_weight": 0, 00:04:31.997 "nvme_adminq_poll_period_us": 10000, 00:04:31.997 "nvme_ioq_poll_period_us": 0, 00:04:31.997 "io_queue_requests": 0, 00:04:31.997 "delay_cmd_submit": true, 00:04:31.997 "transport_retry_count": 4, 00:04:31.997 "bdev_retry_count": 3, 00:04:31.997 "transport_ack_timeout": 0, 
00:04:31.997 "ctrlr_loss_timeout_sec": 0, 00:04:31.997 "reconnect_delay_sec": 0, 00:04:31.997 "fast_io_fail_timeout_sec": 0, 00:04:31.997 "disable_auto_failback": false, 00:04:31.997 "generate_uuids": false, 00:04:31.997 "transport_tos": 0, 00:04:31.997 "nvme_error_stat": false, 00:04:31.997 "rdma_srq_size": 0, 00:04:31.997 "io_path_stat": false, 00:04:31.997 "allow_accel_sequence": false, 00:04:31.997 "rdma_max_cq_size": 0, 00:04:31.997 "rdma_cm_event_timeout_ms": 0, 00:04:31.997 "dhchap_digests": [ 00:04:31.997 "sha256", 00:04:31.997 "sha384", 00:04:31.997 "sha512" 00:04:31.997 ], 00:04:31.997 "dhchap_dhgroups": [ 00:04:31.997 "null", 00:04:31.997 "ffdhe2048", 00:04:31.997 "ffdhe3072", 00:04:31.997 "ffdhe4096", 00:04:31.997 "ffdhe6144", 00:04:31.997 "ffdhe8192" 00:04:31.997 ] 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "bdev_nvme_set_hotplug", 00:04:31.997 "params": { 00:04:31.997 "period_us": 100000, 00:04:31.997 "enable": false 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "bdev_wait_for_examine" 00:04:31.997 } 00:04:31.997 ] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "scsi", 00:04:31.997 "config": null 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "scheduler", 00:04:31.997 "config": [ 00:04:31.997 { 00:04:31.997 "method": "framework_set_scheduler", 00:04:31.997 "params": { 00:04:31.997 "name": "static" 00:04:31.997 } 00:04:31.997 } 00:04:31.997 ] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "vhost_scsi", 00:04:31.997 "config": [] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "vhost_blk", 00:04:31.997 "config": [] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "ublk", 00:04:31.997 "config": [] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "nbd", 00:04:31.997 "config": [] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "nvmf", 00:04:31.997 "config": [ 00:04:31.997 { 00:04:31.997 "method": "nvmf_set_config", 00:04:31.997 "params": { 00:04:31.997 "discovery_filter": "match_any", 00:04:31.997 "admin_cmd_passthru": { 00:04:31.997 "identify_ctrlr": false 00:04:31.997 } 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "nvmf_set_max_subsystems", 00:04:31.997 "params": { 00:04:31.997 "max_subsystems": 1024 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "nvmf_set_crdt", 00:04:31.997 "params": { 00:04:31.997 "crdt1": 0, 00:04:31.997 "crdt2": 0, 00:04:31.997 "crdt3": 0 00:04:31.997 } 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "method": "nvmf_create_transport", 00:04:31.997 "params": { 00:04:31.997 "trtype": "TCP", 00:04:31.997 "max_queue_depth": 128, 00:04:31.997 "max_io_qpairs_per_ctrlr": 127, 00:04:31.997 "in_capsule_data_size": 4096, 00:04:31.997 "max_io_size": 131072, 00:04:31.997 "io_unit_size": 131072, 00:04:31.997 "max_aq_depth": 128, 00:04:31.997 "num_shared_buffers": 511, 00:04:31.997 "buf_cache_size": 4294967295, 00:04:31.997 "dif_insert_or_strip": false, 00:04:31.997 "zcopy": false, 00:04:31.997 "c2h_success": true, 00:04:31.997 "sock_priority": 0, 00:04:31.997 "abort_timeout_sec": 1, 00:04:31.997 "ack_timeout": 0, 00:04:31.997 "data_wr_pool_size": 0 00:04:31.997 } 00:04:31.997 } 00:04:31.997 ] 00:04:31.997 }, 00:04:31.997 { 00:04:31.997 "subsystem": "iscsi", 00:04:31.997 "config": [ 00:04:31.997 { 00:04:31.997 "method": "iscsi_set_options", 00:04:31.997 "params": { 00:04:31.997 "node_base": "iqn.2016-06.io.spdk", 00:04:31.997 "max_sessions": 128, 00:04:31.997 "max_connections_per_session": 2, 00:04:31.997 
"max_queue_depth": 64, 00:04:31.997 "default_time2wait": 2, 00:04:31.997 "default_time2retain": 20, 00:04:31.997 "first_burst_length": 8192, 00:04:31.997 "immediate_data": true, 00:04:31.997 "allow_duplicated_isid": false, 00:04:31.997 "error_recovery_level": 0, 00:04:31.997 "nop_timeout": 60, 00:04:31.997 "nop_in_interval": 30, 00:04:31.997 "disable_chap": false, 00:04:31.997 "require_chap": false, 00:04:31.997 "mutual_chap": false, 00:04:31.997 "chap_group": 0, 00:04:31.997 "max_large_datain_per_connection": 64, 00:04:31.997 "max_r2t_per_connection": 4, 00:04:31.997 "pdu_pool_size": 36864, 00:04:31.997 "immediate_data_pool_size": 16384, 00:04:31.997 "data_out_pool_size": 2048 00:04:31.997 } 00:04:31.997 } 00:04:31.997 ] 00:04:31.997 } 00:04:31.997 ] 00:04:31.997 } 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59018 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59018 ']' 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59018 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59018 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.997 killing process with pid 59018 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59018' 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 59018 00:04:31.997 13:00:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59018 00:04:32.255 13:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59040 00:04:32.255 13:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.255 13:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59040 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59040 ']' 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59040 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59040 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.520 killing process with pid 59040 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59040' 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@969 -- # kill 59040 00:04:37.520 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59040 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.779 00:04:37.779 real 0m6.954s 00:04:37.779 user 0m6.602s 00:04:37.779 sys 0m0.648s 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.779 ************************************ 00:04:37.779 END TEST skip_rpc_with_json 00:04:37.779 ************************************ 00:04:37.779 13:00:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.779 13:00:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.779 13:00:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.779 13:00:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.779 ************************************ 00:04:37.779 START TEST skip_rpc_with_delay 00:04:37.779 ************************************ 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.779 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.039 [2024-07-25 13:00:29.971560] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
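The skip_rpc_with_delay case traced above is a negative test: spdk_tgt is launched with both --no-rpc-server and --wait-for-rpc, and the app rejects that combination before starting any reactors. A minimal sketch of the same check, using the binary path from this run (the NOT wrapper from autotest_common.sh, judging by the trace, only asserts that the wrapped command exits non-zero):

# Reproduce the rejected flag combination exercised by test_skip_rpc_with_delay.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi
# Expected failure message, as seen in the log above:
#   "Cannot use '--wait-for-rpc' if no RPC server is going to be started."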
00:04:38.039 [2024-07-25 13:00:29.971704] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:38.039 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:38.039 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.039 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.039 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:38.039 00:04:38.039 real 0m0.089s 00:04:38.039 user 0m0.053s 00:04:38.039 sys 0m0.032s 00:04:38.039 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.039 13:00:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.039 ************************************ 00:04:38.039 END TEST skip_rpc_with_delay 00:04:38.039 ************************************ 00:04:38.039 13:00:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.039 13:00:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.039 13:00:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.039 13:00:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.039 13:00:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.039 13:00:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.039 ************************************ 00:04:38.039 START TEST exit_on_failed_rpc_init 00:04:38.039 ************************************ 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59155 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59155 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59155 ']' 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.039 13:00:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.039 [2024-07-25 13:00:30.111421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:38.039 [2024-07-25 13:00:30.111524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:04:38.298 [2024-07-25 13:00:30.250834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.298 [2024-07-25 13:00:30.348369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.298 [2024-07-25 13:00:30.403494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.232 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.232 [2024-07-25 13:00:31.169227] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:39.232 [2024-07-25 13:00:31.169324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:04:39.232 [2024-07-25 13:00:31.307744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.232 [2024-07-25 13:00:31.409867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.232 [2024-07-25 13:00:31.409990] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
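The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the point of exit_on_failed_rpc_init: the first spdk_tgt (pid 59155) already owns the default RPC socket, so the second instance started on core mask 0x2 fails rpc_initialize, spdk_app_stop reports a non-zero rc, and the test asserts that failure. A rough sketch of the collision; the alternate socket path in the last comment is purely illustrative of how the clash is normally avoided (spdk_tgt's -r flag, used elsewhere in this log, selects the RPC listen socket):

# First target claims the default RPC socket /var/tmp/spdk.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# A second target using the same default socket fails to bring up its RPC server.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2        # exits: socket in use
# Giving it its own socket instead would avoid the conflict, e.g.:
#   spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock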
00:04:39.232 [2024-07-25 13:00:31.410006] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.232 [2024-07-25 13:00:31.410015] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.490 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:39.490 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:39.490 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:39.490 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:39.490 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:39.490 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59155 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59155 ']' 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59155 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59155 00:04:39.491 killing process with pid 59155 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59155' 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59155 00:04:39.491 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59155 00:04:40.056 00:04:40.056 real 0m1.897s 00:04:40.056 user 0m2.236s 00:04:40.056 sys 0m0.421s 00:04:40.056 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.056 ************************************ 00:04:40.056 END TEST exit_on_failed_rpc_init 00:04:40.056 ************************************ 00:04:40.056 13:00:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.056 13:00:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.056 00:04:40.056 real 0m14.653s 00:04:40.056 user 0m14.029s 00:04:40.056 sys 0m1.560s 00:04:40.056 13:00:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.056 ************************************ 00:04:40.056 END TEST skip_rpc 00:04:40.056 13:00:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.056 ************************************ 00:04:40.056 13:00:32 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.056 13:00:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.056 13:00:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.056 13:00:32 -- common/autotest_common.sh@10 -- # set +x 00:04:40.056 
************************************ 00:04:40.056 START TEST rpc_client 00:04:40.056 ************************************ 00:04:40.056 13:00:32 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.056 * Looking for test storage... 00:04:40.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.056 13:00:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.056 OK 00:04:40.056 13:00:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.056 00:04:40.056 real 0m0.106s 00:04:40.056 user 0m0.048s 00:04:40.056 sys 0m0.064s 00:04:40.056 ************************************ 00:04:40.056 END TEST rpc_client 00:04:40.056 ************************************ 00:04:40.056 13:00:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.056 13:00:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.056 13:00:32 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.056 13:00:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.056 13:00:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.056 13:00:32 -- common/autotest_common.sh@10 -- # set +x 00:04:40.056 ************************************ 00:04:40.056 START TEST json_config 00:04:40.056 ************************************ 00:04:40.056 13:00:32 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.315 13:00:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.315 13:00:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.315 13:00:32 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.315 13:00:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.315 13:00:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.315 13:00:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.315 13:00:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.315 13:00:32 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.315 13:00:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.315 13:00:32 json_config -- nvmf/common.sh@47 -- # : 0 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:40.316 13:00:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:40.316 INFO: JSON configuration test init 
00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.316 13:00:32 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:40.316 13:00:32 json_config -- json_config/common.sh@9 -- # local app=target 00:04:40.316 13:00:32 json_config -- json_config/common.sh@10 -- # shift 00:04:40.316 13:00:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.316 13:00:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.316 13:00:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.316 13:00:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.316 13:00:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.316 13:00:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59291 00:04:40.316 13:00:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:40.316 13:00:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.316 Waiting for target to run... 00:04:40.316 13:00:32 json_config -- json_config/common.sh@25 -- # waitforlisten 59291 /var/tmp/spdk_tgt.sock 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 59291 ']' 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:40.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.316 13:00:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.316 [2024-07-25 13:00:32.364597] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:40.316 [2024-07-25 13:00:32.365243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:04:40.882 [2024-07-25 13:00:32.817812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.882 [2024-07-25 13:00:32.909586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.139 00:04:41.139 13:00:33 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.139 13:00:33 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:41.139 13:00:33 json_config -- json_config/common.sh@26 -- # echo '' 00:04:41.139 13:00:33 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:41.139 13:00:33 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:41.139 13:00:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.139 13:00:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.139 13:00:33 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:41.139 13:00:33 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:41.139 13:00:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.139 13:00:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.139 13:00:33 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:41.139 13:00:33 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:41.139 13:00:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:41.705 [2024-07-25 13:00:33.613512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:41.705 13:00:33 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:41.705 13:00:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:41.705 13:00:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.705 13:00:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.705 13:00:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:41.705 13:00:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:41.705 13:00:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:41.705 13:00:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:41.705 13:00:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:41.705 13:00:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:41.963 13:00:34 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:41.964 
13:00:34 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@51 -- # sort 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:41.964 13:00:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.964 13:00:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:41.964 13:00:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.964 13:00:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:41.964 13:00:34 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:41.964 13:00:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:42.223 MallocForNvmf0 00:04:42.223 13:00:34 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.223 13:00:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.480 MallocForNvmf1 00:04:42.480 13:00:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.480 13:00:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.737 [2024-07-25 13:00:34.807071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.737 13:00:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.737 13:00:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:04:42.995 13:00:35 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.995 13:00:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.252 13:00:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.252 13:00:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:43.511 13:00:35 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.511 13:00:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:43.770 [2024-07-25 13:00:35.727529] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:43.770 13:00:35 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:43.770 13:00:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.770 13:00:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 13:00:35 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:43.770 13:00:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.770 13:00:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 13:00:35 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:43.770 13:00:35 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.770 13:00:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.028 MallocBdevForConfigChangeCheck 00:04:44.028 13:00:36 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:44.028 13:00:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.028 13:00:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.028 13:00:36 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:44.028 13:00:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.592 INFO: shutting down applications... 00:04:44.592 13:00:36 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
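Taken together, the tgt_rpc calls traced above assemble the NVMe-oF/TCP configuration that json_config then snapshots with save_config. Collected from the trace into one plain rpc.py session against the target socket (names, sizes, and addresses exactly as in the log; redirecting save_config to a file is a sketch of what the test routes through json_diff.sh):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
$RPC -s $SOCK bdev_malloc_create 8 512 --name MallocForNvmf0      # malloc bdevs backing the namespaces
$RPC -s $SOCK bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC -s $SOCK nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport ("TCP Transport Init" above)
$RPC -s $SOCK nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC -s $SOCK nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC -s $SOCK save_config > spdk_tgt_config.json                  # snapshot used by the later relaunch and diff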
00:04:44.592 13:00:36 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:44.592 13:00:36 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:44.592 13:00:36 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:44.592 13:00:36 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:44.848 Calling clear_iscsi_subsystem 00:04:44.848 Calling clear_nvmf_subsystem 00:04:44.848 Calling clear_nbd_subsystem 00:04:44.848 Calling clear_ublk_subsystem 00:04:44.848 Calling clear_vhost_blk_subsystem 00:04:44.848 Calling clear_vhost_scsi_subsystem 00:04:44.848 Calling clear_bdev_subsystem 00:04:44.848 13:00:36 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:44.848 13:00:36 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:44.848 13:00:36 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:44.848 13:00:36 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.848 13:00:36 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:44.848 13:00:36 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:45.105 13:00:37 json_config -- json_config/json_config.sh@349 -- # break 00:04:45.105 13:00:37 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:45.362 13:00:37 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:45.362 13:00:37 json_config -- json_config/common.sh@31 -- # local app=target 00:04:45.362 13:00:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.362 13:00:37 json_config -- json_config/common.sh@35 -- # [[ -n 59291 ]] 00:04:45.362 13:00:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59291 00:04:45.362 13:00:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.362 13:00:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.362 13:00:37 json_config -- json_config/common.sh@41 -- # kill -0 59291 00:04:45.362 13:00:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.620 13:00:37 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.620 13:00:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.620 13:00:37 json_config -- json_config/common.sh@41 -- # kill -0 59291 00:04:45.620 13:00:37 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:45.620 13:00:37 json_config -- json_config/common.sh@43 -- # break 00:04:45.620 13:00:37 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:45.620 SPDK target shutdown done 00:04:45.620 13:00:37 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:45.620 INFO: relaunching applications... 00:04:45.620 13:00:37 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
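With the subsystems cleared and pid 59291 stopped via SIGINT, the next block of the log relaunches the target directly from the JSON it just saved. In outline, with the same flags as the trace (-m 0x1 pins it to core 0, -s 1024 caps its memory pool, -r selects the RPC socket the test talks to):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json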
00:04:45.620 13:00:37 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.620 13:00:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:45.620 13:00:37 json_config -- json_config/common.sh@10 -- # shift 00:04:45.620 13:00:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.620 13:00:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.620 13:00:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.620 13:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.620 13:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.620 Waiting for target to run... 00:04:45.620 13:00:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59481 00:04:45.620 13:00:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.620 13:00:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.621 13:00:37 json_config -- json_config/common.sh@25 -- # waitforlisten 59481 /var/tmp/spdk_tgt.sock 00:04:45.621 13:00:37 json_config -- common/autotest_common.sh@831 -- # '[' -z 59481 ']' 00:04:45.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.935 13:00:37 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.935 13:00:37 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.935 13:00:37 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.935 13:00:37 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.935 13:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.935 [2024-07-25 13:00:37.869059] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:45.935 [2024-07-25 13:00:37.869142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59481 ] 00:04:46.193 [2024-07-25 13:00:38.285403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.193 [2024-07-25 13:00:38.365260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.461 [2024-07-25 13:00:38.491158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:46.751 [2024-07-25 13:00:38.693976] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.751 [2024-07-25 13:00:38.726049] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:46.751 00:04:46.751 INFO: Checking if target configuration is the same... 
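The "Checking if target configuration is the same..." step that follows dumps the live configuration over RPC and diffs it against the saved file, normalizing both sides first. Stripped of json_diff.sh's mktemp plumbing (the /tmp/62.* and /tmp/spdk_tgt_config.json.* files seen below; the file names here are stand-ins), the check is roughly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
$RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'
# The second pass deletes MallocBdevForConfigChangeCheck first, so the same diff then returns 1.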
00:04:46.751 13:00:38 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.751 13:00:38 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:46.751 13:00:38 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.751 13:00:38 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:46.751 13:00:38 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:46.751 13:00:38 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:46.751 13:00:38 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.751 13:00:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.751 + '[' 2 -ne 2 ']' 00:04:46.751 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:46.751 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:46.751 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:46.751 +++ basename /dev/fd/62 00:04:46.751 ++ mktemp /tmp/62.XXX 00:04:46.751 + tmp_file_1=/tmp/62.SFT 00:04:46.751 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.751 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.751 + tmp_file_2=/tmp/spdk_tgt_config.json.aaY 00:04:46.751 + ret=0 00:04:46.751 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:47.010 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:47.268 + diff -u /tmp/62.SFT /tmp/spdk_tgt_config.json.aaY 00:04:47.268 INFO: JSON config files are the same 00:04:47.268 + echo 'INFO: JSON config files are the same' 00:04:47.268 + rm /tmp/62.SFT /tmp/spdk_tgt_config.json.aaY 00:04:47.268 + exit 0 00:04:47.268 INFO: changing configuration and checking if this can be detected... 00:04:47.268 13:00:39 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:47.268 13:00:39 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:47.268 13:00:39 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.268 13:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.525 13:00:39 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.525 13:00:39 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:47.525 13:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.525 + '[' 2 -ne 2 ']' 00:04:47.525 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:47.525 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:47.525 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:47.525 +++ basename /dev/fd/62 00:04:47.525 ++ mktemp /tmp/62.XXX 00:04:47.525 + tmp_file_1=/tmp/62.A89 00:04:47.525 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.525 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.525 + tmp_file_2=/tmp/spdk_tgt_config.json.Npq 00:04:47.525 + ret=0 00:04:47.525 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:47.784 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:47.784 + diff -u /tmp/62.A89 /tmp/spdk_tgt_config.json.Npq 00:04:47.784 + ret=1 00:04:47.784 + echo '=== Start of file: /tmp/62.A89 ===' 00:04:47.784 + cat /tmp/62.A89 00:04:47.784 + echo '=== End of file: /tmp/62.A89 ===' 00:04:47.784 + echo '' 00:04:47.784 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Npq ===' 00:04:47.784 + cat /tmp/spdk_tgt_config.json.Npq 00:04:47.784 + echo '=== End of file: /tmp/spdk_tgt_config.json.Npq ===' 00:04:47.784 + echo '' 00:04:47.784 + rm /tmp/62.A89 /tmp/spdk_tgt_config.json.Npq 00:04:47.784 + exit 1 00:04:47.784 INFO: configuration change detected. 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:47.784 13:00:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.784 13:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@321 -- # [[ -n 59481 ]] 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:47.784 13:00:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.784 13:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:47.784 13:00:39 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:48.042 13:00:39 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:48.042 13:00:39 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:48.042 13:00:39 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:48.042 13:00:39 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:48.042 13:00:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.042 13:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.042 13:00:40 json_config -- json_config/json_config.sh@327 -- # killprocess 59481 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@950 -- # '[' -z 59481 ']' 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@954 -- # kill -0 59481 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@955 -- # uname 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59481 00:04:48.042 
killing process with pid 59481 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59481' 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@969 -- # kill 59481 00:04:48.042 13:00:40 json_config -- common/autotest_common.sh@974 -- # wait 59481 00:04:48.301 13:00:40 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.301 13:00:40 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:48.301 13:00:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.301 13:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.301 INFO: Success 00:04:48.301 13:00:40 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:48.301 13:00:40 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:48.301 ************************************ 00:04:48.301 END TEST json_config 00:04:48.301 ************************************ 00:04:48.301 00:04:48.301 real 0m8.140s 00:04:48.301 user 0m11.436s 00:04:48.301 sys 0m1.780s 00:04:48.301 13:00:40 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.301 13:00:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.301 13:00:40 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.301 13:00:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.301 13:00:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.301 13:00:40 -- common/autotest_common.sh@10 -- # set +x 00:04:48.301 ************************************ 00:04:48.301 START TEST json_config_extra_key 00:04:48.301 ************************************ 00:04:48.301 13:00:40 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:04:48.301 13:00:40 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.301 13:00:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.301 13:00:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.301 13:00:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.301 13:00:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.301 13:00:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.301 13:00:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.301 13:00:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:48.301 13:00:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.301 13:00:40 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:48.301 13:00:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:48.301 INFO: launching applications... 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:48.301 13:00:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59627 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.301 Waiting for target to run... 00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:48.301 13:00:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59627 /var/tmp/spdk_tgt.sock 00:04:48.301 13:00:40 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59627 ']' 00:04:48.301 13:00:40 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.301 13:00:40 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.301 13:00:40 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.301 13:00:40 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.301 13:00:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.573 [2024-07-25 13:00:40.537255] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:48.573 [2024-07-25 13:00:40.537655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59627 ] 00:04:48.831 [2024-07-25 13:00:40.969829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.089 [2024-07-25 13:00:41.058287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.089 [2024-07-25 13:00:41.078906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:49.347 00:04:49.347 INFO: shutting down applications... 00:04:49.347 13:00:41 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.347 13:00:41 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:49.347 13:00:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
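The shutdown traced next sends SIGINT and then polls process liveness with a bounded retry, per json_config/common.sh. A condensed sketch of that pattern, using the same associative-array naming as the test:

  pid=${app_pid[target]}
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 fails once the target has exited
      sleep 0.5
  done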
00:04:49.347 13:00:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59627 ]] 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59627 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59627 00:04:49.347 13:00:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.917 13:00:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.917 13:00:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.917 13:00:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59627 00:04:49.917 13:00:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.917 13:00:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:49.917 13:00:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.917 SPDK target shutdown done 00:04:49.917 13:00:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.917 Success 00:04:49.917 13:00:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:49.917 ************************************ 00:04:49.917 END TEST json_config_extra_key 00:04:49.917 ************************************ 00:04:49.917 00:04:49.917 real 0m1.568s 00:04:49.917 user 0m1.442s 00:04:49.918 sys 0m0.427s 00:04:49.918 13:00:41 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.918 13:00:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.918 13:00:41 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.918 13:00:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.918 13:00:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.918 13:00:41 -- common/autotest_common.sh@10 -- # set +x 00:04:49.918 ************************************ 00:04:49.918 START TEST alias_rpc 00:04:49.918 ************************************ 00:04:49.918 13:00:42 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.918 * Looking for test storage... 00:04:49.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:49.918 13:00:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:49.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:49.918 13:00:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59692 00:04:49.918 13:00:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59692 00:04:49.918 13:00:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.918 13:00:42 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59692 ']' 00:04:49.918 13:00:42 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.918 13:00:42 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.918 13:00:42 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.918 13:00:42 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.918 13:00:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.175 [2024-07-25 13:00:42.156610] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:50.175 [2024-07-25 13:00:42.156999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59692 ] 00:04:50.175 [2024-07-25 13:00:42.294989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.432 [2024-07-25 13:00:42.411312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.432 [2024-07-25 13:00:42.467380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.998 13:00:43 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.998 13:00:43 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:50.998 13:00:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:51.256 13:00:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59692 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59692 ']' 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59692 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59692 00:04:51.256 killing process with pid 59692 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59692' 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@969 -- # kill 59692 00:04:51.256 13:00:43 alias_rpc -- common/autotest_common.sh@974 -- # wait 59692 00:04:51.821 ************************************ 00:04:51.821 END TEST alias_rpc 00:04:51.821 ************************************ 00:04:51.821 00:04:51.821 real 0m1.800s 00:04:51.821 user 0m2.061s 00:04:51.821 sys 0m0.418s 00:04:51.821 13:00:43 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.821 13:00:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.821 13:00:43 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:51.821 13:00:43 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 
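The alias_rpc pass above drives a freshly started target through scripts/rpc.py load_config -i. A hedged sketch of a comparable round trip by hand (not the test's exact flow; save_config and load_config use stdout/stdin when no filename is given, and the dump path is illustrative):

  $SPDK/scripts/rpc.py save_config > /tmp/spdk_config.json
  $SPDK/scripts/rpc.py load_config -i < /tmp/spdk_config.json   # -i mirrors the flag the test passes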
00:04:51.821 13:00:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.821 13:00:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.821 13:00:43 -- common/autotest_common.sh@10 -- # set +x 00:04:51.821 ************************************ 00:04:51.821 START TEST spdkcli_tcp 00:04:51.821 ************************************ 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:51.821 * Looking for test storage... 00:04:51.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59762 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59762 00:04:51.821 13:00:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59762 ']' 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.821 13:00:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.821 [2024-07-25 13:00:44.007347] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:51.821 [2024-07-25 13:00:44.007438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59762 ] 00:04:52.078 [2024-07-25 13:00:44.143008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.078 [2024-07-25 13:00:44.239318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.078 [2024-07-25 13:00:44.239330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.335 [2024-07-25 13:00:44.292138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:52.900 13:00:44 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.900 13:00:44 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:52.900 13:00:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59779 00:04:52.900 13:00:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:52.900 13:00:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:53.158 [ 00:04:53.158 "bdev_malloc_delete", 00:04:53.158 "bdev_malloc_create", 00:04:53.158 "bdev_null_resize", 00:04:53.158 "bdev_null_delete", 00:04:53.158 "bdev_null_create", 00:04:53.158 "bdev_nvme_cuse_unregister", 00:04:53.158 "bdev_nvme_cuse_register", 00:04:53.158 "bdev_opal_new_user", 00:04:53.158 "bdev_opal_set_lock_state", 00:04:53.158 "bdev_opal_delete", 00:04:53.158 "bdev_opal_get_info", 00:04:53.158 "bdev_opal_create", 00:04:53.158 "bdev_nvme_opal_revert", 00:04:53.158 "bdev_nvme_opal_init", 00:04:53.158 "bdev_nvme_send_cmd", 00:04:53.158 "bdev_nvme_get_path_iostat", 00:04:53.158 "bdev_nvme_get_mdns_discovery_info", 00:04:53.158 "bdev_nvme_stop_mdns_discovery", 00:04:53.158 "bdev_nvme_start_mdns_discovery", 00:04:53.158 "bdev_nvme_set_multipath_policy", 00:04:53.158 "bdev_nvme_set_preferred_path", 00:04:53.158 "bdev_nvme_get_io_paths", 00:04:53.158 "bdev_nvme_remove_error_injection", 00:04:53.158 "bdev_nvme_add_error_injection", 00:04:53.158 "bdev_nvme_get_discovery_info", 00:04:53.158 "bdev_nvme_stop_discovery", 00:04:53.158 "bdev_nvme_start_discovery", 00:04:53.158 "bdev_nvme_get_controller_health_info", 00:04:53.158 "bdev_nvme_disable_controller", 00:04:53.158 "bdev_nvme_enable_controller", 00:04:53.158 "bdev_nvme_reset_controller", 00:04:53.158 "bdev_nvme_get_transport_statistics", 00:04:53.158 "bdev_nvme_apply_firmware", 00:04:53.158 "bdev_nvme_detach_controller", 00:04:53.158 "bdev_nvme_get_controllers", 00:04:53.158 "bdev_nvme_attach_controller", 00:04:53.158 "bdev_nvme_set_hotplug", 00:04:53.158 "bdev_nvme_set_options", 00:04:53.158 "bdev_passthru_delete", 00:04:53.158 "bdev_passthru_create", 00:04:53.158 "bdev_lvol_set_parent_bdev", 00:04:53.158 "bdev_lvol_set_parent", 00:04:53.158 "bdev_lvol_check_shallow_copy", 00:04:53.158 "bdev_lvol_start_shallow_copy", 00:04:53.158 "bdev_lvol_grow_lvstore", 00:04:53.158 "bdev_lvol_get_lvols", 00:04:53.158 "bdev_lvol_get_lvstores", 00:04:53.158 "bdev_lvol_delete", 00:04:53.158 "bdev_lvol_set_read_only", 00:04:53.158 "bdev_lvol_resize", 00:04:53.158 "bdev_lvol_decouple_parent", 00:04:53.158 "bdev_lvol_inflate", 00:04:53.158 "bdev_lvol_rename", 00:04:53.158 "bdev_lvol_clone_bdev", 00:04:53.158 "bdev_lvol_clone", 00:04:53.158 "bdev_lvol_snapshot", 00:04:53.158 "bdev_lvol_create", 
00:04:53.158 "bdev_lvol_delete_lvstore", 00:04:53.158 "bdev_lvol_rename_lvstore", 00:04:53.158 "bdev_lvol_create_lvstore", 00:04:53.158 "bdev_raid_set_options", 00:04:53.159 "bdev_raid_remove_base_bdev", 00:04:53.159 "bdev_raid_add_base_bdev", 00:04:53.159 "bdev_raid_delete", 00:04:53.159 "bdev_raid_create", 00:04:53.159 "bdev_raid_get_bdevs", 00:04:53.159 "bdev_error_inject_error", 00:04:53.159 "bdev_error_delete", 00:04:53.159 "bdev_error_create", 00:04:53.159 "bdev_split_delete", 00:04:53.159 "bdev_split_create", 00:04:53.159 "bdev_delay_delete", 00:04:53.159 "bdev_delay_create", 00:04:53.159 "bdev_delay_update_latency", 00:04:53.159 "bdev_zone_block_delete", 00:04:53.159 "bdev_zone_block_create", 00:04:53.159 "blobfs_create", 00:04:53.159 "blobfs_detect", 00:04:53.159 "blobfs_set_cache_size", 00:04:53.159 "bdev_aio_delete", 00:04:53.159 "bdev_aio_rescan", 00:04:53.159 "bdev_aio_create", 00:04:53.159 "bdev_ftl_set_property", 00:04:53.159 "bdev_ftl_get_properties", 00:04:53.159 "bdev_ftl_get_stats", 00:04:53.159 "bdev_ftl_unmap", 00:04:53.159 "bdev_ftl_unload", 00:04:53.159 "bdev_ftl_delete", 00:04:53.159 "bdev_ftl_load", 00:04:53.159 "bdev_ftl_create", 00:04:53.159 "bdev_virtio_attach_controller", 00:04:53.159 "bdev_virtio_scsi_get_devices", 00:04:53.159 "bdev_virtio_detach_controller", 00:04:53.159 "bdev_virtio_blk_set_hotplug", 00:04:53.159 "bdev_iscsi_delete", 00:04:53.159 "bdev_iscsi_create", 00:04:53.159 "bdev_iscsi_set_options", 00:04:53.159 "bdev_uring_delete", 00:04:53.159 "bdev_uring_rescan", 00:04:53.159 "bdev_uring_create", 00:04:53.159 "accel_error_inject_error", 00:04:53.159 "ioat_scan_accel_module", 00:04:53.159 "dsa_scan_accel_module", 00:04:53.159 "iaa_scan_accel_module", 00:04:53.159 "keyring_file_remove_key", 00:04:53.159 "keyring_file_add_key", 00:04:53.159 "keyring_linux_set_options", 00:04:53.159 "iscsi_get_histogram", 00:04:53.159 "iscsi_enable_histogram", 00:04:53.159 "iscsi_set_options", 00:04:53.159 "iscsi_get_auth_groups", 00:04:53.159 "iscsi_auth_group_remove_secret", 00:04:53.159 "iscsi_auth_group_add_secret", 00:04:53.159 "iscsi_delete_auth_group", 00:04:53.159 "iscsi_create_auth_group", 00:04:53.159 "iscsi_set_discovery_auth", 00:04:53.159 "iscsi_get_options", 00:04:53.159 "iscsi_target_node_request_logout", 00:04:53.159 "iscsi_target_node_set_redirect", 00:04:53.159 "iscsi_target_node_set_auth", 00:04:53.159 "iscsi_target_node_add_lun", 00:04:53.159 "iscsi_get_stats", 00:04:53.159 "iscsi_get_connections", 00:04:53.159 "iscsi_portal_group_set_auth", 00:04:53.159 "iscsi_start_portal_group", 00:04:53.159 "iscsi_delete_portal_group", 00:04:53.159 "iscsi_create_portal_group", 00:04:53.159 "iscsi_get_portal_groups", 00:04:53.159 "iscsi_delete_target_node", 00:04:53.159 "iscsi_target_node_remove_pg_ig_maps", 00:04:53.159 "iscsi_target_node_add_pg_ig_maps", 00:04:53.159 "iscsi_create_target_node", 00:04:53.159 "iscsi_get_target_nodes", 00:04:53.159 "iscsi_delete_initiator_group", 00:04:53.159 "iscsi_initiator_group_remove_initiators", 00:04:53.159 "iscsi_initiator_group_add_initiators", 00:04:53.159 "iscsi_create_initiator_group", 00:04:53.159 "iscsi_get_initiator_groups", 00:04:53.159 "nvmf_set_crdt", 00:04:53.159 "nvmf_set_config", 00:04:53.159 "nvmf_set_max_subsystems", 00:04:53.159 "nvmf_stop_mdns_prr", 00:04:53.159 "nvmf_publish_mdns_prr", 00:04:53.159 "nvmf_subsystem_get_listeners", 00:04:53.159 "nvmf_subsystem_get_qpairs", 00:04:53.159 "nvmf_subsystem_get_controllers", 00:04:53.159 "nvmf_get_stats", 00:04:53.159 "nvmf_get_transports", 00:04:53.159 
"nvmf_create_transport", 00:04:53.159 "nvmf_get_targets", 00:04:53.159 "nvmf_delete_target", 00:04:53.159 "nvmf_create_target", 00:04:53.159 "nvmf_subsystem_allow_any_host", 00:04:53.159 "nvmf_subsystem_remove_host", 00:04:53.159 "nvmf_subsystem_add_host", 00:04:53.159 "nvmf_ns_remove_host", 00:04:53.159 "nvmf_ns_add_host", 00:04:53.159 "nvmf_subsystem_remove_ns", 00:04:53.159 "nvmf_subsystem_add_ns", 00:04:53.159 "nvmf_subsystem_listener_set_ana_state", 00:04:53.159 "nvmf_discovery_get_referrals", 00:04:53.159 "nvmf_discovery_remove_referral", 00:04:53.159 "nvmf_discovery_add_referral", 00:04:53.159 "nvmf_subsystem_remove_listener", 00:04:53.159 "nvmf_subsystem_add_listener", 00:04:53.159 "nvmf_delete_subsystem", 00:04:53.159 "nvmf_create_subsystem", 00:04:53.159 "nvmf_get_subsystems", 00:04:53.159 "env_dpdk_get_mem_stats", 00:04:53.159 "nbd_get_disks", 00:04:53.159 "nbd_stop_disk", 00:04:53.159 "nbd_start_disk", 00:04:53.159 "ublk_recover_disk", 00:04:53.159 "ublk_get_disks", 00:04:53.159 "ublk_stop_disk", 00:04:53.159 "ublk_start_disk", 00:04:53.159 "ublk_destroy_target", 00:04:53.159 "ublk_create_target", 00:04:53.159 "virtio_blk_create_transport", 00:04:53.159 "virtio_blk_get_transports", 00:04:53.159 "vhost_controller_set_coalescing", 00:04:53.159 "vhost_get_controllers", 00:04:53.159 "vhost_delete_controller", 00:04:53.159 "vhost_create_blk_controller", 00:04:53.159 "vhost_scsi_controller_remove_target", 00:04:53.159 "vhost_scsi_controller_add_target", 00:04:53.159 "vhost_start_scsi_controller", 00:04:53.159 "vhost_create_scsi_controller", 00:04:53.159 "thread_set_cpumask", 00:04:53.159 "framework_get_governor", 00:04:53.159 "framework_get_scheduler", 00:04:53.159 "framework_set_scheduler", 00:04:53.159 "framework_get_reactors", 00:04:53.159 "thread_get_io_channels", 00:04:53.159 "thread_get_pollers", 00:04:53.159 "thread_get_stats", 00:04:53.159 "framework_monitor_context_switch", 00:04:53.159 "spdk_kill_instance", 00:04:53.159 "log_enable_timestamps", 00:04:53.159 "log_get_flags", 00:04:53.159 "log_clear_flag", 00:04:53.159 "log_set_flag", 00:04:53.159 "log_get_level", 00:04:53.159 "log_set_level", 00:04:53.159 "log_get_print_level", 00:04:53.159 "log_set_print_level", 00:04:53.159 "framework_enable_cpumask_locks", 00:04:53.159 "framework_disable_cpumask_locks", 00:04:53.159 "framework_wait_init", 00:04:53.159 "framework_start_init", 00:04:53.159 "scsi_get_devices", 00:04:53.159 "bdev_get_histogram", 00:04:53.159 "bdev_enable_histogram", 00:04:53.159 "bdev_set_qos_limit", 00:04:53.159 "bdev_set_qd_sampling_period", 00:04:53.159 "bdev_get_bdevs", 00:04:53.159 "bdev_reset_iostat", 00:04:53.159 "bdev_get_iostat", 00:04:53.159 "bdev_examine", 00:04:53.159 "bdev_wait_for_examine", 00:04:53.159 "bdev_set_options", 00:04:53.159 "notify_get_notifications", 00:04:53.159 "notify_get_types", 00:04:53.159 "accel_get_stats", 00:04:53.159 "accel_set_options", 00:04:53.159 "accel_set_driver", 00:04:53.159 "accel_crypto_key_destroy", 00:04:53.159 "accel_crypto_keys_get", 00:04:53.159 "accel_crypto_key_create", 00:04:53.159 "accel_assign_opc", 00:04:53.159 "accel_get_module_info", 00:04:53.159 "accel_get_opc_assignments", 00:04:53.159 "vmd_rescan", 00:04:53.159 "vmd_remove_device", 00:04:53.159 "vmd_enable", 00:04:53.159 "sock_get_default_impl", 00:04:53.159 "sock_set_default_impl", 00:04:53.159 "sock_impl_set_options", 00:04:53.159 "sock_impl_get_options", 00:04:53.159 "iobuf_get_stats", 00:04:53.159 "iobuf_set_options", 00:04:53.159 "framework_get_pci_devices", 00:04:53.159 
"framework_get_config", 00:04:53.159 "framework_get_subsystems", 00:04:53.159 "trace_get_info", 00:04:53.159 "trace_get_tpoint_group_mask", 00:04:53.159 "trace_disable_tpoint_group", 00:04:53.159 "trace_enable_tpoint_group", 00:04:53.159 "trace_clear_tpoint_mask", 00:04:53.159 "trace_set_tpoint_mask", 00:04:53.159 "keyring_get_keys", 00:04:53.159 "spdk_get_version", 00:04:53.159 "rpc_get_methods" 00:04:53.159 ] 00:04:53.159 13:00:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.159 13:00:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:53.159 13:00:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59762 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59762 ']' 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59762 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59762 00:04:53.159 killing process with pid 59762 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59762' 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59762 00:04:53.159 13:00:45 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59762 00:04:53.723 ************************************ 00:04:53.723 END TEST spdkcli_tcp 00:04:53.723 ************************************ 00:04:53.723 00:04:53.723 real 0m1.815s 00:04:53.723 user 0m3.403s 00:04:53.723 sys 0m0.460s 00:04:53.723 13:00:45 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.723 13:00:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.723 13:00:45 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.723 13:00:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.723 13:00:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.723 13:00:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.723 ************************************ 00:04:53.723 START TEST dpdk_mem_utility 00:04:53.724 ************************************ 00:04:53.724 13:00:45 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.724 * Looking for test storage... 
00:04:53.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:53.724 13:00:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:53.724 13:00:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59853 00:04:53.724 13:00:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.724 13:00:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59853 00:04:53.724 13:00:45 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59853 ']' 00:04:53.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.724 13:00:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.724 13:00:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.724 13:00:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.724 13:00:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.724 13:00:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.724 [2024-07-25 13:00:45.855894] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:53.724 [2024-07-25 13:00:45.855992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59853 ] 00:04:53.981 [2024-07-25 13:00:45.985345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.981 [2024-07-25 13:00:46.095694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.981 [2024-07-25 13:00:46.150349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.247 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.247 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:54.247 13:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:54.247 13:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:54.247 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.247 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.247 { 00:04:54.247 "filename": "/tmp/spdk_mem_dump.txt" 00:04:54.247 } 00:04:54.247 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.247 13:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:54.505 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:54.505 1 heaps totaling size 814.000000 MiB 00:04:54.505 size: 814.000000 MiB heap id: 0 00:04:54.505 end heaps---------- 00:04:54.505 8 mempools totaling size 598.116089 MiB 00:04:54.505 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.505 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.505 size: 84.521057 MiB name: bdev_io_59853 00:04:54.505 size: 51.011292 MiB name: evtpool_59853 00:04:54.505 size: 50.003479 
MiB name: msgpool_59853 00:04:54.505 size: 21.763794 MiB name: PDU_Pool 00:04:54.505 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.505 size: 0.026123 MiB name: Session_Pool 00:04:54.505 end mempools------- 00:04:54.505 6 memzones totaling size 4.142822 MiB 00:04:54.505 size: 1.000366 MiB name: RG_ring_0_59853 00:04:54.505 size: 1.000366 MiB name: RG_ring_1_59853 00:04:54.505 size: 1.000366 MiB name: RG_ring_4_59853 00:04:54.505 size: 1.000366 MiB name: RG_ring_5_59853 00:04:54.505 size: 0.125366 MiB name: RG_ring_2_59853 00:04:54.505 size: 0.015991 MiB name: RG_ring_3_59853 00:04:54.505 end memzones------- 00:04:54.505 13:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:54.505 heap id: 0 total size: 814.000000 MiB number of busy elements: 301 number of free elements: 15 00:04:54.505 list of free elements. size: 12.471741 MiB 00:04:54.505 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:54.505 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:54.505 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:54.505 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:54.505 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:54.505 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:54.505 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:54.505 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:54.505 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:54.505 element at address: 0x20001aa00000 with size: 0.568970 MiB 00:04:54.505 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:54.505 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:54.505 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:54.505 element at address: 0x200027e00000 with size: 0.395935 MiB 00:04:54.505 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:54.505 list of standard malloc elements. 
size: 199.265686 MiB 00:04:54.505 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:54.505 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:54.505 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:54.505 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:54.505 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:54.505 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:54.505 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:54.505 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:54.505 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:54.505 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:54.505 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:54.506 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:54.506 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91f00 
with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa943c0 with size: 0.000183 MiB 
00:04:54.506 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:54.506 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:54.506 element at 
address: 0x200027e6d680 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6fb40 
with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:54.506 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:54.506 list of memzone associated elements. size: 602.262573 MiB 00:04:54.506 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:54.506 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:54.506 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:54.506 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:54.506 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:54.507 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59853_0 00:04:54.507 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:54.507 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59853_0 00:04:54.507 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:54.507 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59853_0 00:04:54.507 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:54.507 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:54.507 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:54.507 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:54.507 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:54.507 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59853 00:04:54.507 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:54.507 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59853 00:04:54.507 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:54.507 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59853 00:04:54.507 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:54.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:54.507 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:54.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:54.507 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:54.507 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:54.507 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:54.507 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:54.507 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:54.507 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59853 00:04:54.507 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:54.507 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59853 00:04:54.507 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:54.507 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59853 00:04:54.507 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:54.507 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59853 00:04:54.507 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:54.507 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59853 00:04:54.507 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:54.507 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 
00:04:54.507 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:54.507 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:54.507 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:54.507 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:54.507 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:54.507 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59853 00:04:54.507 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:54.507 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:54.507 element at address: 0x200027e65740 with size: 0.023743 MiB 00:04:54.507 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:54.507 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:54.507 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59853 00:04:54.507 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:04:54.507 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:54.507 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:54.507 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59853 00:04:54.507 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:54.507 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59853 00:04:54.507 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:04:54.507 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:54.507 13:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:54.507 13:00:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59853 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59853 ']' 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59853 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59853 00:04:54.507 killing process with pid 59853 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59853' 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59853 00:04:54.507 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59853 00:04:54.765 ************************************ 00:04:54.765 END TEST dpdk_mem_utility 00:04:54.765 ************************************ 00:04:54.765 00:04:54.765 real 0m1.228s 00:04:54.765 user 0m1.240s 00:04:54.765 sys 0m0.394s 00:04:54.765 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.765 13:00:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.024 13:00:46 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:55.024 13:00:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.024 13:00:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.024 13:00:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.024 
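The dpdk_mem_utility pass above asks the running target to dump its DPDK memory state and then post-processes that dump; the same three steps by hand, against a target on the default /var/tmp/spdk.sock:

  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt, as echoed in the trace
  $SPDK/scripts/dpdk_mem_info.py                  # heap, mempool and memzone summary
  $SPDK/scripts/dpdk_mem_info.py -m 0             # element-level detail (heap id 0 in the trace)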
************************************ 00:04:55.024 START TEST event 00:04:55.024 ************************************ 00:04:55.024 13:00:47 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:55.024 * Looking for test storage... 00:04:55.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:55.024 13:00:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:55.024 13:00:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:55.024 13:00:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.024 13:00:47 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:55.024 13:00:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.024 13:00:47 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.024 ************************************ 00:04:55.024 START TEST event_perf 00:04:55.024 ************************************ 00:04:55.024 13:00:47 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:55.024 Running I/O for 1 seconds...[2024-07-25 13:00:47.108185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:55.024 [2024-07-25 13:00:47.109048] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:04:55.281 [2024-07-25 13:00:47.253414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.281 [2024-07-25 13:00:47.351094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.281 [2024-07-25 13:00:47.351257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.281 [2024-07-25 13:00:47.351407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.281 [2024-07-25 13:00:47.351410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.670 Running I/O for 1 seconds... 00:04:56.670 lcore 0: 209280 00:04:56.670 lcore 1: 209278 00:04:56.670 lcore 2: 209279 00:04:56.670 lcore 3: 209279 00:04:56.670 done. 00:04:56.670 00:04:56.670 real 0m1.339s 00:04:56.670 user 0m4.147s 00:04:56.670 sys 0m0.069s 00:04:56.670 13:00:48 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.670 13:00:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:56.670 ************************************ 00:04:56.670 END TEST event_perf 00:04:56.670 ************************************ 00:04:56.670 13:00:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:56.670 13:00:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:56.670 13:00:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.670 13:00:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.670 ************************************ 00:04:56.670 START TEST event_reactor 00:04:56.670 ************************************ 00:04:56.670 13:00:48 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:56.670 [2024-07-25 13:00:48.502617] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:56.670 [2024-07-25 13:00:48.503250] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59956 ] 00:04:56.670 [2024-07-25 13:00:48.641093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.670 [2024-07-25 13:00:48.722355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.057 test_start 00:04:58.057 oneshot 00:04:58.057 tick 100 00:04:58.057 tick 100 00:04:58.057 tick 250 00:04:58.057 tick 100 00:04:58.057 tick 100 00:04:58.057 tick 100 00:04:58.057 tick 250 00:04:58.057 tick 500 00:04:58.057 tick 100 00:04:58.057 tick 100 00:04:58.057 tick 250 00:04:58.057 tick 100 00:04:58.057 tick 100 00:04:58.057 test_end 00:04:58.057 00:04:58.057 real 0m1.340s 00:04:58.057 user 0m1.182s 00:04:58.057 sys 0m0.053s 00:04:58.057 13:00:49 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.057 13:00:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:58.057 ************************************ 00:04:58.057 END TEST event_reactor 00:04:58.057 ************************************ 00:04:58.057 13:00:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.057 13:00:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:58.057 13:00:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.057 13:00:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.057 ************************************ 00:04:58.057 START TEST event_reactor_perf 00:04:58.057 ************************************ 00:04:58.057 13:00:49 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:58.057 [2024-07-25 13:00:49.898027] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:58.057 [2024-07-25 13:00:49.898120] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:04:58.057 [2024-07-25 13:00:50.032165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.057 [2024-07-25 13:00:50.147395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.432 test_start 00:04:59.432 test_end 00:04:59.432 Performance: 423753 events per second 00:04:59.432 00:04:59.432 real 0m1.383s 00:04:59.432 user 0m1.220s 00:04:59.432 sys 0m0.056s 00:04:59.432 13:00:51 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.432 13:00:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.432 ************************************ 00:04:59.432 END TEST event_reactor_perf 00:04:59.432 ************************************ 00:04:59.432 13:00:51 event -- event/event.sh@49 -- # uname -s 00:04:59.432 13:00:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:59.432 13:00:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:59.432 13:00:51 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.432 13:00:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.432 13:00:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.432 ************************************ 00:04:59.432 START TEST event_scheduler 00:04:59.432 ************************************ 00:04:59.432 13:00:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:59.432 * Looking for test storage... 00:04:59.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:59.432 13:00:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:59.432 13:00:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60053 00:04:59.432 13:00:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:59.432 13:00:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.432 13:00:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60053 00:04:59.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.432 13:00:51 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60053 ']' 00:04:59.432 13:00:51 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.432 13:00:51 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.432 13:00:51 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.432 13:00:51 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.432 13:00:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.432 [2024-07-25 13:00:51.448200] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:59.432 [2024-07-25 13:00:51.448596] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60053 ] 00:04:59.432 [2024-07-25 13:00:51.586943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.689 [2024-07-25 13:00:51.725341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.689 [2024-07-25 13:00:51.725468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.689 [2024-07-25 13:00:51.725529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.689 [2024-07-25 13:00:51.725535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.252 13:00:52 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.252 13:00:52 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:00.252 13:00:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:00.252 13:00:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.252 13:00:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.510 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.510 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.510 POWER: Cannot set governor of lcore 0 to performance 00:05:00.510 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.510 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.510 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.510 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.510 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:00.510 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:00.510 POWER: Unable to set Power Management Environment for lcore 0 00:05:00.510 [2024-07-25 13:00:52.448795] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:00.510 [2024-07-25 13:00:52.448921] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:00.510 [2024-07-25 13:00:52.448973] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:00.510 [2024-07-25 13:00:52.449125] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:00.510 [2024-07-25 13:00:52.449238] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:00.510 [2024-07-25 13:00:52.449286] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:00.510 13:00:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:00.510 13:00:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 [2024-07-25 13:00:52.512535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.510 [2024-07-25 13:00:52.545460] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:00.510 13:00:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:00.510 13:00:52 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.510 13:00:52 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 ************************************ 00:05:00.510 START TEST scheduler_create_thread 00:05:00.510 ************************************ 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 2 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 3 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 4 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 5 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 6 00:05:00.510 
13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 7 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 8 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.510 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.510 9 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 10 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.511 13:00:52 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.511 13:00:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.409 13:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.409 13:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:02.409 13:00:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:02.409 13:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.409 13:00:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.363 ************************************ 00:05:03.363 END TEST scheduler_create_thread 00:05:03.363 ************************************ 00:05:03.363 13:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.363 00:05:03.363 real 0m2.615s 00:05:03.363 user 0m0.015s 00:05:03.363 sys 0m0.009s 00:05:03.363 13:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.363 13:00:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.363 13:00:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.363 13:00:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60053 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60053 ']' 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60053 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60053 00:05:03.363 killing process with pid 60053 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60053' 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60053 00:05:03.363 13:00:55 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60053 00:05:03.622 [2024-07-25 13:00:55.653844] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
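Stripped of the xtrace noise, the scheduler_create_thread run that just finished boils down to a short RPC sequence against the scheduler test app. A condensed sketch of that sequence, assuming the repo path used by this job and that the test's scheduler_plugin is importable via PYTHONPATH; the binaries, flags, and RPC method names are the ones visible in the log, the sleep stands in for the test's waitforlisten helper:

```bash
#!/usr/bin/env bash
# Sketch of the RPC sequence driven by test/event/scheduler/scheduler.sh above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: this job's checkout
rpc="$SPDK_DIR/scripts/rpc.py"

# Start the scheduler test app on 4 cores, main core 2, waiting for RPC (as in the log).
"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
sleep 1   # the real test uses waitforlisten on /var/tmp/spdk.sock

# Select the dynamic scheduler, then finish subsystem initialization.
"$rpc" framework_set_scheduler dynamic
"$rpc" framework_start_init

# The thread-management methods come from the test's scheduler_plugin, loaded
# with --plugin; putting the plugin directory on PYTHONPATH is an assumption.
export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"
id=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
"$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
"$rpc" --plugin scheduler_plugin scheduler_thread_delete "$id"

kill "$scheduler_pid"
```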
00:05:03.880 00:05:03.880 real 0m4.575s 00:05:03.880 user 0m8.601s 00:05:03.880 sys 0m0.370s 00:05:03.880 13:00:55 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.880 ************************************ 00:05:03.880 END TEST event_scheduler 00:05:03.880 ************************************ 00:05:03.880 13:00:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.880 13:00:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.880 13:00:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.880 13:00:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.880 13:00:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.880 13:00:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.880 ************************************ 00:05:03.880 START TEST app_repeat 00:05:03.880 ************************************ 00:05:03.880 13:00:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:03.880 Process app_repeat pid: 60152 00:05:03.880 spdk_app_start Round 0 00:05:03.880 13:00:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60152 00:05:03.881 13:00:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.881 13:00:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60152' 00:05:03.881 13:00:55 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.881 13:00:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.881 13:00:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.881 13:00:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60152 /var/tmp/spdk-nbd.sock 00:05:03.881 13:00:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60152 ']' 00:05:03.881 13:00:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.881 13:00:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.881 13:00:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.881 13:00:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.881 13:00:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.881 [2024-07-25 13:00:55.980467] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:03.881 [2024-07-25 13:00:55.980551] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60152 ] 00:05:04.139 [2024-07-25 13:00:56.111243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.139 [2024-07-25 13:00:56.207325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.139 [2024-07-25 13:00:56.207335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.139 [2024-07-25 13:00:56.261501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:05.076 13:00:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:05.076 13:00:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:05.076 13:00:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.076 Malloc0 00:05:05.076 13:00:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.333 Malloc1 00:05:05.333 13:00:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.333 13:00:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.591 /dev/nbd0 00:05:05.591 13:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.591 13:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.591 13:00:57 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.591 1+0 records in 00:05:05.591 1+0 records out 00:05:05.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475611 s, 8.6 MB/s 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.591 13:00:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:05.591 13:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.591 13:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.591 13:00:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.849 /dev/nbd1 00:05:05.849 13:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.849 13:00:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.849 1+0 records in 00:05:05.849 1+0 records out 00:05:05.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610127 s, 6.7 MB/s 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.849 13:00:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:05.849 13:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.849 13:00:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.849 13:00:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:05.849 13:00:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.849 13:00:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.107 13:00:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.107 { 00:05:06.107 "nbd_device": "/dev/nbd0", 00:05:06.107 "bdev_name": "Malloc0" 00:05:06.107 }, 00:05:06.107 { 00:05:06.107 "nbd_device": "/dev/nbd1", 00:05:06.107 "bdev_name": "Malloc1" 00:05:06.107 } 00:05:06.107 ]' 00:05:06.107 13:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.107 { 00:05:06.107 "nbd_device": "/dev/nbd0", 00:05:06.107 "bdev_name": "Malloc0" 00:05:06.107 }, 00:05:06.107 { 00:05:06.107 "nbd_device": "/dev/nbd1", 00:05:06.107 "bdev_name": "Malloc1" 00:05:06.107 } 00:05:06.107 ]' 00:05:06.107 13:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.366 /dev/nbd1' 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.366 /dev/nbd1' 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.366 256+0 records in 00:05:06.366 256+0 records out 00:05:06.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00697788 s, 150 MB/s 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.366 256+0 records in 00:05:06.366 256+0 records out 00:05:06.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270865 s, 38.7 MB/s 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.366 256+0 records in 00:05:06.366 256+0 records out 00:05:06.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02557 s, 41.0 MB/s 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.366 13:00:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.367 13:00:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.625 13:00:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.883 13:00:58 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.883 13:00:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.142 13:00:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.142 13:00:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.399 13:00:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.657 [2024-07-25 13:00:59.759327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.915 [2024-07-25 13:00:59.855919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.915 [2024-07-25 13:00:59.855929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.915 [2024-07-25 13:00:59.910928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.916 [2024-07-25 13:00:59.911025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.916 [2024-07-25 13:00:59.911040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.447 spdk_app_start Round 1 00:05:10.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.447 13:01:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.447 13:01:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:10.447 13:01:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60152 /var/tmp/spdk-nbd.sock 00:05:10.447 13:01:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60152 ']' 00:05:10.447 13:01:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.447 13:01:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.447 13:01:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
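Each app_repeat round in this run exercises the same NBD data path: create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write a random 1 MiB pattern through the block devices, and compare it back. A standalone sketch of one such round, assuming the repo path from this job, an already-built app_repeat binary, and a scratch file under /tmp; the RPC socket, method names, and dd/cmp invocations are the ones shown in the log:

```bash
#!/usr/bin/env bash
# Sketch of one app_repeat/NBD verification round, condensed from the log above.
set -e
SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # assumption: this job's checkout
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

modprobe nbd
"$SPDK_DIR/test/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
app_pid=$!
sleep 1   # the real test uses waitforlisten on the RPC socket

# One malloc bdev per NBD device, exported over the socket above.
$rpc bdev_malloc_create 64 4096      # -> Malloc0
$rpc bdev_malloc_create 64 4096      # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

# Write a random 1 MiB pattern through each NBD device and verify it.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$nbd"
done
rm /tmp/nbdrandtest

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM
wait "$app_pid" || true
```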
00:05:10.447 13:01:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.447 13:01:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.705 13:01:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.705 13:01:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:10.705 13:01:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.967 Malloc0 00:05:10.967 13:01:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.230 Malloc1 00:05:11.231 13:01:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.231 13:01:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.488 /dev/nbd0 00:05:11.488 13:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.488 13:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.488 1+0 records in 00:05:11.488 1+0 records out 
00:05:11.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025472 s, 16.1 MB/s 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:11.488 13:01:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:11.488 13:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.488 13:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.488 13:01:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.746 /dev/nbd1 00:05:11.746 13:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.746 13:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.746 1+0 records in 00:05:11.746 1+0 records out 00:05:11.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375388 s, 10.9 MB/s 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:11.746 13:01:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:11.746 13:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.746 13:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.746 13:01:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.746 13:01:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.746 13:01:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.314 { 00:05:12.314 "nbd_device": "/dev/nbd0", 00:05:12.314 "bdev_name": "Malloc0" 00:05:12.314 }, 00:05:12.314 { 00:05:12.314 "nbd_device": "/dev/nbd1", 00:05:12.314 "bdev_name": "Malloc1" 00:05:12.314 } 
00:05:12.314 ]' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.314 { 00:05:12.314 "nbd_device": "/dev/nbd0", 00:05:12.314 "bdev_name": "Malloc0" 00:05:12.314 }, 00:05:12.314 { 00:05:12.314 "nbd_device": "/dev/nbd1", 00:05:12.314 "bdev_name": "Malloc1" 00:05:12.314 } 00:05:12.314 ]' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.314 /dev/nbd1' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.314 /dev/nbd1' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.314 256+0 records in 00:05:12.314 256+0 records out 00:05:12.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00877886 s, 119 MB/s 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.314 256+0 records in 00:05:12.314 256+0 records out 00:05:12.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021753 s, 48.2 MB/s 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.314 256+0 records in 00:05:12.314 256+0 records out 00:05:12.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032023 s, 32.7 MB/s 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.314 13:01:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.573 13:01:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.831 13:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.832 13:01:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.090 13:01:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.090 13:01:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.349 13:01:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.608 [2024-07-25 13:01:05.609763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.608 [2024-07-25 13:01:05.678466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.608 [2024-07-25 13:01:05.678475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.608 [2024-07-25 13:01:05.732119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.608 [2024-07-25 13:01:05.732234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.608 [2024-07-25 13:01:05.732247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.895 spdk_app_start Round 2 00:05:16.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.895 13:01:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.895 13:01:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:16.895 13:01:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60152 /var/tmp/spdk-nbd.sock 00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60152 ']' 00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
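The nbd_dd_data_verify pass traced above boils down to three steps: fill a 1 MiB reference file from /dev/urandom, dd it onto every exported /dev/nbdX with O_DIRECT, then cmp each device back against the file before deleting it. A minimal standalone sketch of that flow, with the temporary-file location assumed for illustration (the run itself uses test/event/nbdrandtest):

#!/usr/bin/env bash
# Sketch of the write-then-verify pass shown in the trace above.
set -euo pipefail

nbd_list=(/dev/nbd0 /dev/nbd1)               # devices exported via nbd_start_disk
tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)   # reference data (path assumed)

# Write pass: 256 x 4 KiB of random data, copied to each device with O_DIRECT.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify pass: every device must match the reference file byte for byte.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"          # any mismatch fails the run
done

rm "$tmp_file"
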
00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.895 13:01:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:16.895 13:01:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.895 Malloc0 00:05:16.895 13:01:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.159 Malloc1 00:05:17.159 13:01:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.159 13:01:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.419 /dev/nbd0 00:05:17.419 13:01:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.419 13:01:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.419 1+0 records in 00:05:17.419 1+0 records out 
00:05:17.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280852 s, 14.6 MB/s 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.419 13:01:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.419 13:01:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.419 13:01:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.419 13:01:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.678 /dev/nbd1 00:05:17.678 13:01:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.678 13:01:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.678 1+0 records in 00:05:17.678 1+0 records out 00:05:17.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227973 s, 18.0 MB/s 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:17.678 13:01:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:17.678 13:01:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.678 13:01:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.678 13:01:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.678 13:01:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.678 13:01:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.937 { 00:05:17.937 "nbd_device": "/dev/nbd0", 00:05:17.937 "bdev_name": "Malloc0" 00:05:17.937 }, 00:05:17.937 { 00:05:17.937 "nbd_device": "/dev/nbd1", 00:05:17.937 "bdev_name": "Malloc1" 00:05:17.937 } 
00:05:17.937 ]' 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.937 { 00:05:17.937 "nbd_device": "/dev/nbd0", 00:05:17.937 "bdev_name": "Malloc0" 00:05:17.937 }, 00:05:17.937 { 00:05:17.937 "nbd_device": "/dev/nbd1", 00:05:17.937 "bdev_name": "Malloc1" 00:05:17.937 } 00:05:17.937 ]' 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.937 /dev/nbd1' 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.937 /dev/nbd1' 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.937 256+0 records in 00:05:17.937 256+0 records out 00:05:17.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00791238 s, 133 MB/s 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.937 256+0 records in 00:05:17.937 256+0 records out 00:05:17.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227827 s, 46.0 MB/s 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.937 13:01:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.196 256+0 records in 00:05:18.196 256+0 records out 00:05:18.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249458 s, 42.0 MB/s 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.196 13:01:10 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.196 13:01:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.453 13:01:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.711 13:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.970 13:01:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.970 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.970 13:01:10 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.970 13:01:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.970 13:01:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.228 13:01:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.487 [2024-07-25 13:01:11.515440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.487 [2024-07-25 13:01:11.589281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.487 [2024-07-25 13:01:11.589296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.487 [2024-07-25 13:01:11.646237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:19.487 [2024-07-25 13:01:11.646406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.487 [2024-07-25 13:01:11.646421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.768 13:01:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60152 /var/tmp/spdk-nbd.sock 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60152 ']' 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
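The nbd_get_count steps above reduce the nbd_get_disks JSON to a count of exported /dev/nbd entries and require it to be zero after nbd_stop_disks. Roughly the same check as a standalone snippet; the rpc.py path is an assumption, while the jq filter and the grep fallback mirror the trace:

#!/usr/bin/env bash
# Count exported nbd devices through the SPDK RPC socket (paths assumed).
set -euo pipefail

rpc=./scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

disks_json=$("$rpc" -s "$sock" nbd_get_disks)
# Pull out the /dev/nbdX names; an empty export list yields an empty string.
disk_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits non-zero on zero matches, so tolerate that and keep the printed 0.
count=$(echo "$disk_names" | grep -c /dev/nbd || true)

# After nbd_stop_disks every export should be gone.
[ "$count" -eq 0 ] || { echo "still exported: $disk_names" >&2; exit 1; }
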
00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.768 13:01:14 event.app_repeat -- event/event.sh@39 -- # killprocess 60152 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60152 ']' 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60152 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60152 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.768 killing process with pid 60152 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60152' 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60152 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60152 00:05:22.768 spdk_app_start is called in Round 0. 00:05:22.768 Shutdown signal received, stop current app iteration 00:05:22.768 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:22.768 spdk_app_start is called in Round 1. 00:05:22.768 Shutdown signal received, stop current app iteration 00:05:22.768 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:22.768 spdk_app_start is called in Round 2. 00:05:22.768 Shutdown signal received, stop current app iteration 00:05:22.768 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:05:22.768 spdk_app_start is called in Round 3. 00:05:22.768 Shutdown signal received, stop current app iteration 00:05:22.768 13:01:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:22.768 13:01:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:22.768 00:05:22.768 real 0m18.858s 00:05:22.768 user 0m42.161s 00:05:22.768 sys 0m2.943s 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.768 13:01:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 ************************************ 00:05:22.768 END TEST app_repeat 00:05:22.768 ************************************ 00:05:22.768 13:01:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:22.768 13:01:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:22.768 13:01:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.768 13:01:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.768 13:01:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 ************************************ 00:05:22.768 START TEST cpu_locks 00:05:22.768 ************************************ 00:05:22.768 13:01:14 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:22.768 * Looking for test storage... 
00:05:22.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:22.768 13:01:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:22.768 13:01:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:22.768 13:01:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:22.768 13:01:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:22.768 13:01:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.768 13:01:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.768 13:01:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 ************************************ 00:05:22.768 START TEST default_locks 00:05:22.768 ************************************ 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60585 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60585 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60585 ']' 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.768 13:01:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.027 [2024-07-25 13:01:15.007288] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:23.027 [2024-07-25 13:01:15.007392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60585 ] 00:05:23.027 [2024-07-25 13:01:15.146420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.285 [2024-07-25 13:01:15.247750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.285 [2024-07-25 13:01:15.301458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:23.852 13:01:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.852 13:01:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:23.852 13:01:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60585 00:05:23.852 13:01:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.852 13:01:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60585 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60585 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60585 ']' 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60585 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60585 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60585' 00:05:24.419 killing process with pid 60585 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60585 00:05:24.419 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60585 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60585 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60585 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60585 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60585 ']' 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.986 13:01:16 
event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60585) - No such process 00:05:24.986 ERROR: process (pid: 60585) is no longer running 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.986 13:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.987 13:01:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.987 00:05:24.987 real 0m1.937s 00:05:24.987 user 0m2.065s 00:05:24.987 sys 0m0.602s 00:05:24.987 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.987 13:01:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.987 ************************************ 00:05:24.987 END TEST default_locks 00:05:24.987 ************************************ 00:05:24.987 13:01:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:24.987 13:01:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.987 13:01:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.987 13:01:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.987 ************************************ 00:05:24.987 START TEST default_locks_via_rpc 00:05:24.987 ************************************ 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60637 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60637 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60637 ']' 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:24.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.987 13:01:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.987 [2024-07-25 13:01:17.005197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:24.987 [2024-07-25 13:01:17.005766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60637 ] 00:05:24.987 [2024-07-25 13:01:17.143698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.245 [2024-07-25 13:01:17.252318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.246 [2024-07-25 13:01:17.308766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:25.813 13:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.813 13:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.813 13:01:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:25.813 13:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.813 13:01:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.071 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.071 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60637 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60637 00:05:26.072 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60637 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60637 ']' 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60637 00:05:26.330 13:01:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60637 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.330 killing process with pid 60637 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60637' 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60637 00:05:26.330 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60637 00:05:26.896 00:05:26.896 real 0m1.954s 00:05:26.896 user 0m2.098s 00:05:26.896 sys 0m0.602s 00:05:26.896 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.896 13:01:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.896 ************************************ 00:05:26.896 END TEST default_locks_via_rpc 00:05:26.896 ************************************ 00:05:26.896 13:01:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:26.896 13:01:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.896 13:01:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.896 13:01:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.896 ************************************ 00:05:26.896 START TEST non_locking_app_on_locked_coremask 00:05:26.896 ************************************ 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60688 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60688 /var/tmp/spdk.sock 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60688 ']' 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
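Both teardowns above (pids 60585 and 60637) go through the same killprocess helper: resolve the command name for the pid, print "killing process with pid …", send SIGTERM and wait for the target to exit. A reduced sketch of that pattern; the real helper has extra handling when the target was launched under sudo:

#!/usr/bin/env bash
# Simplified killprocess, modelled on the trace above.
killprocess() {
    local pid=$1 name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK target
    if [ "$name" = sudo ]; then
        echo "target $pid runs under sudo; the real helper signals its child instead" >&2
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it; it may not be our child
}

# usage: killprocess "$spdk_tgt_pid"
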
00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.896 13:01:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.896 [2024-07-25 13:01:19.005864] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:26.896 [2024-07-25 13:01:19.006006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60688 ] 00:05:27.154 [2024-07-25 13:01:19.137806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.154 [2024-07-25 13:01:19.250652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.154 [2024-07-25 13:01:19.310466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60704 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60704 /var/tmp/spdk2.sock 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60704 ']' 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.125 13:01:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.125 [2024-07-25 13:01:20.058199] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:28.125 [2024-07-25 13:01:20.058315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60704 ] 00:05:28.125 [2024-07-25 13:01:20.196534] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.125 [2024-07-25 13:01:20.196596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.383 [2024-07-25 13:01:20.397136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.383 [2024-07-25 13:01:20.512257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:28.948 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.948 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:28.949 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60688 00:05:28.949 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60688 00:05:28.949 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60688 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60688 ']' 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60688 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60688 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.883 killing process with pid 60688 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60688' 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60688 00:05:29.883 13:01:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60688 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60704 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60704 ']' 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60704 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60704 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.818 killing process with pid 60704 00:05:30.818 13:01:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60704' 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60704 00:05:30.818 13:01:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60704 00:05:31.076 00:05:31.076 real 0m4.220s 00:05:31.076 user 0m4.627s 00:05:31.076 sys 0m1.182s 00:05:31.076 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.076 13:01:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.076 ************************************ 00:05:31.076 END TEST non_locking_app_on_locked_coremask 00:05:31.076 ************************************ 00:05:31.076 13:01:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:31.076 13:01:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.076 13:01:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.076 13:01:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.076 ************************************ 00:05:31.076 START TEST locking_app_on_unlocked_coremask 00:05:31.076 ************************************ 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60771 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60771 /var/tmp/spdk.sock 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60771 ']' 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.076 13:01:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.334 [2024-07-25 13:01:23.284643] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:31.334 [2024-07-25 13:01:23.285318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60771 ] 00:05:31.334 [2024-07-25 13:01:23.417205] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:31.334 [2024-07-25 13:01:23.417290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.334 [2024-07-25 13:01:23.516549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.592 [2024-07-25 13:01:23.569994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60787 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60787 /var/tmp/spdk2.sock 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60787 ']' 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.159 13:01:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.159 [2024-07-25 13:01:24.274577] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:32.159 [2024-07-25 13:01:24.274666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60787 ] 00:05:32.417 [2024-07-25 13:01:24.419468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.675 [2024-07-25 13:01:24.628802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.675 [2024-07-25 13:01:24.750119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:33.241 13:01:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.241 13:01:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:33.241 13:01:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60787 00:05:33.241 13:01:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60787 00:05:33.241 13:01:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60771 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60771 ']' 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60771 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60771 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.201 killing process with pid 60771 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60771' 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60771 00:05:34.201 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60771 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60787 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60787 ']' 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60787 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60787 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60787' 00:05:34.780 killing process with pid 60787 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60787 00:05:34.780 13:01:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60787 00:05:35.346 00:05:35.346 real 0m4.074s 00:05:35.346 user 0m4.449s 00:05:35.346 sys 0m1.146s 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.346 ************************************ 00:05:35.346 END TEST locking_app_on_unlocked_coremask 00:05:35.346 ************************************ 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.346 13:01:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:35.346 13:01:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.346 13:01:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.346 13:01:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.346 ************************************ 00:05:35.346 START TEST locking_app_on_locked_coremask 00:05:35.346 ************************************ 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60854 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60854 /var/tmp/spdk.sock 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60854 ']' 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.346 13:01:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.346 [2024-07-25 13:01:27.410335] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:35.346 [2024-07-25 13:01:27.410420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60854 ] 00:05:35.605 [2024-07-25 13:01:27.549539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.605 [2024-07-25 13:01:27.658433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.605 [2024-07-25 13:01:27.714308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60870 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60870 /var/tmp/spdk2.sock 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60870 /var/tmp/spdk2.sock 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:36.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60870 /var/tmp/spdk2.sock 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60870 ']' 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.538 13:01:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.538 [2024-07-25 13:01:28.452390] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:36.538 [2024-07-25 13:01:28.452518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:05:36.538 [2024-07-25 13:01:28.592514] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60854 has claimed it. 00:05:36.538 [2024-07-25 13:01:28.592588] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:37.103 ERROR: process (pid: 60870) is no longer running 00:05:37.103 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60870) - No such process 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60854 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60854 00:05:37.103 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60854 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60854 ']' 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60854 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60854 00:05:37.669 killing process with pid 60854 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60854' 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60854 00:05:37.669 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60854 00:05:37.927 00:05:37.927 real 0m2.644s 00:05:37.927 user 0m3.019s 00:05:37.927 sys 0m0.675s 00:05:37.927 ************************************ 00:05:37.927 END TEST locking_app_on_locked_coremask 00:05:37.927 ************************************ 00:05:37.927 13:01:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.927 13:01:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.927 13:01:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:37.927 13:01:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.927 13:01:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.927 13:01:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.927 ************************************ 00:05:37.927 START TEST locking_overlapped_coremask 00:05:37.927 ************************************ 00:05:37.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.927 13:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:37.927 13:01:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60916 00:05:37.927 13:01:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60916 /var/tmp/spdk.sock 00:05:37.927 13:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60916 ']' 00:05:37.927 13:01:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:37.927 13:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.928 13:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.928 13:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.928 13:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.928 13:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.928 [2024-07-25 13:01:30.116594] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:37.928 [2024-07-25 13:01:30.116705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60916 ] 00:05:38.186 [2024-07-25 13:01:30.250618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.186 [2024-07-25 13:01:30.361264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.186 [2024-07-25 13:01:30.361393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.186 [2024-07-25 13:01:30.361398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.444 [2024-07-25 13:01:30.418772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60934 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60934 /var/tmp/spdk2.sock 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60934 /var/tmp/spdk2.sock 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60934 /var/tmp/spdk2.sock 00:05:39.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60934 ']' 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.010 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.010 [2024-07-25 13:01:31.110933] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:39.010 [2024-07-25 13:01:31.111033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60934 ] 00:05:39.268 [2024-07-25 13:01:31.257963] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60916 has claimed it. 00:05:39.268 [2024-07-25 13:01:31.258045] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:39.833 ERROR: process (pid: 60934) is no longer running 00:05:39.833 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60934) - No such process 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60916 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60916 ']' 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60916 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60916 00:05:39.833 killing process with pid 60916 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60916' 00:05:39.833 13:01:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60916 00:05:39.833 13:01:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60916 00:05:40.091 ************************************ 00:05:40.091 END TEST locking_overlapped_coremask 00:05:40.091 ************************************ 00:05:40.091 00:05:40.091 real 0m2.197s 00:05:40.091 user 0m6.055s 00:05:40.091 sys 0m0.436s 00:05:40.091 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.091 13:01:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.349 13:01:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:40.349 13:01:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.349 13:01:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.349 13:01:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.349 ************************************ 00:05:40.349 START TEST locking_overlapped_coremask_via_rpc 00:05:40.349 ************************************ 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60974 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60974 /var/tmp/spdk.sock 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60974 ']' 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.349 13:01:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.349 [2024-07-25 13:01:32.355733] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:40.349 [2024-07-25 13:01:32.355892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60974 ] 00:05:40.349 [2024-07-25 13:01:32.489290] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:40.349 [2024-07-25 13:01:32.489338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.607 [2024-07-25 13:01:32.618750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.607 [2024-07-25 13:01:32.618901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.607 [2024-07-25 13:01:32.618908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.607 [2024-07-25 13:01:32.675344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60992 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60992 /var/tmp/spdk2.sock 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60992 ']' 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.172 13:01:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.430 [2024-07-25 13:01:33.388885] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:41.430 [2024-07-25 13:01:33.388998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60992 ] 00:05:41.430 [2024-07-25 13:01:33.537796] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.430 [2024-07-25 13:01:33.543905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.687 [2024-07-25 13:01:33.748158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.687 [2024-07-25 13:01:33.748245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:41.687 [2024-07-25 13:01:33.748247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.687 [2024-07-25 13:01:33.858114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.251 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.251 [2024-07-25 13:01:34.295954] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60974 has claimed it. 00:05:42.251 request: 00:05:42.251 { 00:05:42.251 "method": "framework_enable_cpumask_locks", 00:05:42.252 "req_id": 1 00:05:42.252 } 00:05:42.252 Got JSON-RPC error response 00:05:42.252 response: 00:05:42.252 { 00:05:42.252 "code": -32603, 00:05:42.252 "message": "Failed to claim CPU core: 2" 00:05:42.252 } 00:05:42.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60974 /var/tmp/spdk.sock 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60974 ']' 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.252 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60992 /var/tmp/spdk2.sock 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60992 ']' 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.510 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.767 00:05:42.767 real 0m2.550s 00:05:42.767 user 0m1.279s 00:05:42.767 sys 0m0.207s 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.767 13:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.767 ************************************ 00:05:42.767 END TEST locking_overlapped_coremask_via_rpc 00:05:42.767 ************************************ 00:05:42.767 13:01:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:42.767 13:01:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60974 ]] 00:05:42.767 13:01:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60974 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60974 ']' 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60974 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60974 00:05:42.768 killing process with pid 60974 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60974' 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60974 00:05:42.768 13:01:34 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60974 00:05:43.333 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60992 ]] 00:05:43.333 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60992 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60992 ']' 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60992 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.333 
13:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60992 00:05:43.333 killing process with pid 60992 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60992' 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60992 00:05:43.333 13:01:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60992 00:05:43.590 13:01:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.590 Process with pid 60974 is not found 00:05:43.590 Process with pid 60992 is not found 00:05:43.590 13:01:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:43.590 13:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60974 ]] 00:05:43.590 13:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60974 00:05:43.590 13:01:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60974 ']' 00:05:43.590 13:01:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60974 00:05:43.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60974) - No such process 00:05:43.590 13:01:35 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60974 is not found' 00:05:43.590 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60992 ]] 00:05:43.590 13:01:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60992 00:05:43.590 13:01:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60992 ']' 00:05:43.590 13:01:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60992 00:05:43.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60992) - No such process 00:05:43.590 13:01:35 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60992 is not found' 00:05:43.590 13:01:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:43.590 ************************************ 00:05:43.590 END TEST cpu_locks 00:05:43.590 ************************************ 00:05:43.590 00:05:43.590 real 0m20.864s 00:05:43.590 user 0m35.477s 00:05:43.590 sys 0m5.738s 00:05:43.591 13:01:35 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.591 13:01:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.591 00:05:43.591 real 0m48.752s 00:05:43.591 user 1m32.908s 00:05:43.591 sys 0m9.468s 00:05:43.591 13:01:35 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.591 13:01:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.591 ************************************ 00:05:43.591 END TEST event 00:05:43.591 ************************************ 00:05:43.849 13:01:35 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:43.849 13:01:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.849 13:01:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.849 13:01:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.849 ************************************ 00:05:43.849 START TEST thread 00:05:43.849 ************************************ 00:05:43.849 13:01:35 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:43.849 * Looking for test storage... 
00:05:43.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:43.849 13:01:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.849 13:01:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:43.849 13:01:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.849 13:01:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.849 ************************************ 00:05:43.849 START TEST thread_poller_perf 00:05:43.849 ************************************ 00:05:43.849 13:01:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:43.849 [2024-07-25 13:01:35.917057] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:43.849 [2024-07-25 13:01:35.917175] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61114 ] 00:05:44.107 [2024-07-25 13:01:36.055109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.107 [2024-07-25 13:01:36.174646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.107 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:45.483 ====================================== 00:05:45.484 busy:2206844051 (cyc) 00:05:45.484 total_run_count: 354000 00:05:45.484 tsc_hz: 2200000000 (cyc) 00:05:45.484 ====================================== 00:05:45.484 poller_cost: 6234 (cyc), 2833 (nsec) 00:05:45.484 00:05:45.484 real 0m1.359s 00:05:45.484 user 0m1.198s 00:05:45.484 sys 0m0.054s 00:05:45.484 13:01:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.484 13:01:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 ************************************ 00:05:45.484 END TEST thread_poller_perf 00:05:45.484 ************************************ 00:05:45.484 13:01:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.484 13:01:37 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:45.484 13:01:37 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.484 13:01:37 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 ************************************ 00:05:45.484 START TEST thread_poller_perf 00:05:45.484 ************************************ 00:05:45.484 13:01:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:45.484 [2024-07-25 13:01:37.334627] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:45.484 [2024-07-25 13:01:37.334710] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61150 ] 00:05:45.484 [2024-07-25 13:01:37.466354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.484 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:45.484 [2024-07-25 13:01:37.566833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.859 ====================================== 00:05:46.859 busy:2202222524 (cyc) 00:05:46.859 total_run_count: 4281000 00:05:46.859 tsc_hz: 2200000000 (cyc) 00:05:46.859 ====================================== 00:05:46.859 poller_cost: 514 (cyc), 233 (nsec) 00:05:46.859 00:05:46.859 real 0m1.339s 00:05:46.859 user 0m1.180s 00:05:46.859 sys 0m0.052s 00:05:46.859 13:01:38 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.859 13:01:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.859 ************************************ 00:05:46.859 END TEST thread_poller_perf 00:05:46.859 ************************************ 00:05:46.859 13:01:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:46.859 00:05:46.859 real 0m2.885s 00:05:46.859 user 0m2.441s 00:05:46.859 sys 0m0.223s 00:05:46.859 13:01:38 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.859 13:01:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.859 ************************************ 00:05:46.859 END TEST thread 00:05:46.859 ************************************ 00:05:46.859 13:01:38 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:46.859 13:01:38 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:46.859 13:01:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.859 13:01:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.859 13:01:38 -- common/autotest_common.sh@10 -- # set +x 00:05:46.859 ************************************ 00:05:46.859 START TEST app_cmdline 00:05:46.859 ************************************ 00:05:46.859 13:01:38 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:46.859 * Looking for test storage... 00:05:46.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:46.859 13:01:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:46.859 13:01:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61224 00:05:46.859 13:01:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61224 00:05:46.859 13:01:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:46.859 13:01:38 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61224 ']' 00:05:46.859 13:01:38 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.859 13:01:38 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.860 13:01:38 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.860 13:01:38 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.860 13:01:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.860 [2024-07-25 13:01:38.887097] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:46.860 [2024-07-25 13:01:38.887240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61224 ] 00:05:46.860 [2024-07-25 13:01:39.026546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.118 [2024-07-25 13:01:39.134214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.118 [2024-07-25 13:01:39.195271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:47.683 13:01:39 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.683 13:01:39 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:47.683 13:01:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:47.941 { 00:05:47.941 "version": "SPDK v24.09-pre git sha1 704257090", 00:05:47.941 "fields": { 00:05:47.941 "major": 24, 00:05:47.941 "minor": 9, 00:05:47.941 "patch": 0, 00:05:47.941 "suffix": "-pre", 00:05:47.941 "commit": "704257090" 00:05:47.941 } 00:05:47.941 } 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:48.199 13:01:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@644 -- # [[ 
-x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:48.199 13:01:40 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.457 request: 00:05:48.457 { 00:05:48.457 "method": "env_dpdk_get_mem_stats", 00:05:48.457 "req_id": 1 00:05:48.457 } 00:05:48.457 Got JSON-RPC error response 00:05:48.457 response: 00:05:48.457 { 00:05:48.457 "code": -32601, 00:05:48.457 "message": "Method not found" 00:05:48.457 } 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.457 13:01:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61224 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61224 ']' 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61224 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61224 00:05:48.457 killing process with pid 61224 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61224' 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@969 -- # kill 61224 00:05:48.457 13:01:40 app_cmdline -- common/autotest_common.sh@974 -- # wait 61224 00:05:48.715 00:05:48.715 real 0m2.138s 00:05:48.715 user 0m2.662s 00:05:48.715 sys 0m0.489s 00:05:48.715 13:01:40 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.715 ************************************ 00:05:48.715 END TEST app_cmdline 00:05:48.715 13:01:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.715 ************************************ 00:05:48.973 13:01:40 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.973 13:01:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.973 13:01:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.973 13:01:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.973 ************************************ 00:05:48.973 START TEST version 00:05:48.973 ************************************ 00:05:48.973 13:01:40 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.973 * Looking for test storage... 
00:05:48.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:48.973 13:01:41 version -- app/version.sh@17 -- # get_header_version major 00:05:48.973 13:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # cut -f2 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.973 13:01:41 version -- app/version.sh@17 -- # major=24 00:05:48.973 13:01:41 version -- app/version.sh@18 -- # get_header_version minor 00:05:48.973 13:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # cut -f2 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.973 13:01:41 version -- app/version.sh@18 -- # minor=9 00:05:48.973 13:01:41 version -- app/version.sh@19 -- # get_header_version patch 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # cut -f2 00:05:48.973 13:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.973 13:01:41 version -- app/version.sh@19 -- # patch=0 00:05:48.973 13:01:41 version -- app/version.sh@20 -- # get_header_version suffix 00:05:48.973 13:01:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # cut -f2 00:05:48.973 13:01:41 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.973 13:01:41 version -- app/version.sh@20 -- # suffix=-pre 00:05:48.973 13:01:41 version -- app/version.sh@22 -- # version=24.9 00:05:48.973 13:01:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:48.973 13:01:41 version -- app/version.sh@28 -- # version=24.9rc0 00:05:48.973 13:01:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:48.973 13:01:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:48.973 13:01:41 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:48.973 13:01:41 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:48.973 00:05:48.973 real 0m0.173s 00:05:48.973 user 0m0.108s 00:05:48.973 sys 0m0.095s 00:05:48.973 13:01:41 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.973 ************************************ 00:05:48.973 END TEST version 00:05:48.973 ************************************ 00:05:48.973 13:01:41 version -- common/autotest_common.sh@10 -- # set +x 00:05:48.973 13:01:41 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:05:48.973 13:01:41 -- spdk/autotest.sh@202 -- # uname -s 00:05:48.973 13:01:41 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:05:48.973 13:01:41 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:05:48.973 13:01:41 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:05:48.973 13:01:41 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:05:48.973 13:01:41 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:48.973 13:01:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.973 13:01:41 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.973 13:01:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.231 ************************************ 00:05:49.231 START TEST spdk_dd 00:05:49.231 ************************************ 00:05:49.231 13:01:41 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:49.231 * Looking for test storage... 00:05:49.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:49.231 13:01:41 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.231 13:01:41 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.231 13:01:41 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.231 13:01:41 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.231 13:01:41 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.231 13:01:41 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.231 13:01:41 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.231 13:01:41 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:49.232 13:01:41 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.232 13:01:41 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:49.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.489 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.489 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:49.489 13:01:41 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:49.489 13:01:41 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:05:49.489 13:01:41 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@230 -- # local class 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:05:49.489 13:01:41 spdk_dd -- scripts/common.sh@232 -- # local progif 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@233 -- # class=01 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@15 -- # local i 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@24 -- # return 0 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:05:49.490 13:01:41 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:05:49.490 13:01:41 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:49.490 13:01:41 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:49.490 13:01:41 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:49.490 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.490 13:01:41 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:49.490 13:01:41 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:49.749 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:49.750 * spdk_dd linked to liburing 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:49.750 13:01:41 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:49.750 13:01:41 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:05:49.750 13:01:41 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:49.751 13:01:41 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:05:49.751 13:01:41 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:49.751 13:01:41 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:49.751 13:01:41 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:49.751 13:01:41 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:49.751 13:01:41 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:49.751 13:01:41 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:49.751 13:01:41 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:49.751 13:01:41 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.751 13:01:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:49.751 ************************************ 00:05:49.751 START TEST spdk_dd_basic_rw 00:05:49.751 ************************************ 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:49.751 * Looking for test storage... 00:05:49.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:49.751 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:50.011 13:01:41 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted 
Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not 
Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:50.011 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete 
Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): 
Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.012 ************************************ 00:05:50.012 START TEST dd_bs_lt_native_bs 00:05:50.012 ************************************ 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:50.012 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:50.012 { 00:05:50.012 "subsystems": [ 00:05:50.012 { 00:05:50.012 "subsystem": "bdev", 00:05:50.012 "config": [ 00:05:50.012 { 00:05:50.012 "params": { 00:05:50.012 "trtype": "pcie", 00:05:50.012 "traddr": "0000:00:10.0", 00:05:50.012 "name": "Nvme0" 00:05:50.012 }, 00:05:50.012 "method": 
"bdev_nvme_attach_controller" 00:05:50.012 }, 00:05:50.012 { 00:05:50.012 "method": "bdev_wait_for_examine" 00:05:50.012 } 00:05:50.012 ] 00:05:50.012 } 00:05:50.012 ] 00:05:50.012 } 00:05:50.012 [2024-07-25 13:01:42.075603] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:50.012 [2024-07-25 13:01:42.075713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61546 ] 00:05:50.270 [2024-07-25 13:01:42.213154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.270 [2024-07-25 13:01:42.329936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.270 [2024-07-25 13:01:42.387856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:50.528 [2024-07-25 13:01:42.494516] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:50.528 [2024-07-25 13:01:42.494588] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:50.528 [2024-07-25 13:01:42.623991] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.786 00:05:50.786 real 0m0.702s 00:05:50.786 user 0m0.495s 00:05:50.786 sys 0m0.157s 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:50.786 ************************************ 00:05:50.786 END TEST dd_bs_lt_native_bs 00:05:50.786 ************************************ 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.786 ************************************ 00:05:50.786 START TEST dd_rw 00:05:50.786 ************************************ 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:50.786 13:01:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.352 13:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:51.352 13:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:51.353 13:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.353 13:01:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.353 { 00:05:51.353 "subsystems": [ 00:05:51.353 { 00:05:51.353 "subsystem": "bdev", 00:05:51.353 "config": [ 00:05:51.353 { 00:05:51.353 "params": { 00:05:51.353 "trtype": "pcie", 00:05:51.353 "traddr": "0000:00:10.0", 00:05:51.353 "name": "Nvme0" 00:05:51.353 }, 00:05:51.353 "method": "bdev_nvme_attach_controller" 00:05:51.353 }, 00:05:51.353 { 00:05:51.353 "method": "bdev_wait_for_examine" 00:05:51.353 } 00:05:51.353 ] 00:05:51.353 } 00:05:51.353 ] 00:05:51.353 } 00:05:51.353 [2024-07-25 13:01:43.477606] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
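(Editor's note, not part of the log: the { "subsystems": ... } block printed just above is the bdev configuration that gen_conf streams to spdk_dd over --json /dev/fd/62. A minimal standalone sketch of the same write invocation, assuming a locally built SPDK tree with build/bin/spdk_dd and an NVMe controller at PCI address 0000:00:10.0; the file name nvme0.json is made up for illustration:)

  cat > nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # Mirror of the spdk_dd command shown above, but reading the config from a file
  # instead of the /dev/fd/62 pipe: write dd.dump0 to the Nvme0n1 bdev in 4096-byte
  # blocks at queue depth 1.
  ./build/bin/spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json nvme0.json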
00:05:51.353 [2024-07-25 13:01:43.477908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:05:51.611 [2024-07-25 13:01:43.614246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.611 [2024-07-25 13:01:43.711354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.611 [2024-07-25 13:01:43.768384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.127  Copying: 60/60 [kB] (average 19 MBps) 00:05:52.127 00:05:52.127 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:52.127 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:52.127 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.127 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.127 [2024-07-25 13:01:44.143885] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:52.127 [2024-07-25 13:01:44.143978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61596 ] 00:05:52.127 { 00:05:52.127 "subsystems": [ 00:05:52.127 { 00:05:52.127 "subsystem": "bdev", 00:05:52.127 "config": [ 00:05:52.127 { 00:05:52.127 "params": { 00:05:52.127 "trtype": "pcie", 00:05:52.127 "traddr": "0000:00:10.0", 00:05:52.127 "name": "Nvme0" 00:05:52.127 }, 00:05:52.127 "method": "bdev_nvme_attach_controller" 00:05:52.127 }, 00:05:52.127 { 00:05:52.127 "method": "bdev_wait_for_examine" 00:05:52.127 } 00:05:52.127 ] 00:05:52.127 } 00:05:52.127 ] 00:05:52.127 } 00:05:52.127 [2024-07-25 13:01:44.282594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.385 [2024-07-25 13:01:44.392682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.385 [2024-07-25 13:01:44.447185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:52.643  Copying: 60/60 [kB] (average 19 MBps) 00:05:52.643 00:05:52.643 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.643 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.644 13:01:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.644 [2024-07-25 13:01:44.819574] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:52.644 [2024-07-25 13:01:44.820206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61612 ] 00:05:52.644 { 00:05:52.644 "subsystems": [ 00:05:52.644 { 00:05:52.644 "subsystem": "bdev", 00:05:52.644 "config": [ 00:05:52.644 { 00:05:52.644 "params": { 00:05:52.644 "trtype": "pcie", 00:05:52.644 "traddr": "0000:00:10.0", 00:05:52.644 "name": "Nvme0" 00:05:52.644 }, 00:05:52.644 "method": "bdev_nvme_attach_controller" 00:05:52.644 }, 00:05:52.644 { 00:05:52.644 "method": "bdev_wait_for_examine" 00:05:52.644 } 00:05:52.644 ] 00:05:52.644 } 00:05:52.644 ] 00:05:52.644 } 00:05:52.902 [2024-07-25 13:01:44.961714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.902 [2024-07-25 13:01:45.057928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.160 [2024-07-25 13:01:45.111665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:53.417  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:53.417 00:05:53.417 13:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:53.417 13:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:53.417 13:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:53.417 13:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:53.417 13:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:53.417 13:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:53.417 13:01:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.982 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:53.982 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:53.982 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.982 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.982 { 00:05:53.982 "subsystems": [ 00:05:53.982 { 00:05:53.982 "subsystem": "bdev", 00:05:53.982 "config": [ 00:05:53.982 { 00:05:53.982 "params": { 00:05:53.982 "trtype": "pcie", 00:05:53.982 "traddr": "0000:00:10.0", 00:05:53.982 "name": "Nvme0" 00:05:53.982 }, 00:05:53.982 "method": "bdev_nvme_attach_controller" 00:05:53.982 }, 00:05:53.982 { 00:05:53.982 "method": "bdev_wait_for_examine" 00:05:53.982 } 00:05:53.982 ] 00:05:53.982 } 00:05:53.982 ] 00:05:53.982 } 00:05:53.982 [2024-07-25 13:01:46.091626] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:53.982 [2024-07-25 13:01:46.092026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61636 ] 00:05:54.240 [2024-07-25 13:01:46.239745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.240 [2024-07-25 13:01:46.336223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.240 [2024-07-25 13:01:46.393797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.757  Copying: 60/60 [kB] (average 58 MBps) 00:05:54.757 00:05:54.757 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:54.757 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:54.757 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.757 13:01:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.757 { 00:05:54.757 "subsystems": [ 00:05:54.757 { 00:05:54.757 "subsystem": "bdev", 00:05:54.757 "config": [ 00:05:54.757 { 00:05:54.757 "params": { 00:05:54.757 "trtype": "pcie", 00:05:54.757 "traddr": "0000:00:10.0", 00:05:54.757 "name": "Nvme0" 00:05:54.757 }, 00:05:54.757 "method": "bdev_nvme_attach_controller" 00:05:54.757 }, 00:05:54.757 { 00:05:54.757 "method": "bdev_wait_for_examine" 00:05:54.757 } 00:05:54.757 ] 00:05:54.757 } 00:05:54.757 ] 00:05:54.757 } 00:05:54.757 [2024-07-25 13:01:46.764798] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:54.757 [2024-07-25 13:01:46.764934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61650 ] 00:05:54.757 [2024-07-25 13:01:46.900409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.017 [2024-07-25 13:01:47.023328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.017 [2024-07-25 13:01:47.080419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.275  Copying: 60/60 [kB] (average 29 MBps) 00:05:55.275 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.275 13:01:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.532 [2024-07-25 13:01:47.488994] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:55.532 [2024-07-25 13:01:47.489114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:05:55.532 { 00:05:55.532 "subsystems": [ 00:05:55.532 { 00:05:55.532 "subsystem": "bdev", 00:05:55.532 "config": [ 00:05:55.532 { 00:05:55.532 "params": { 00:05:55.532 "trtype": "pcie", 00:05:55.532 "traddr": "0000:00:10.0", 00:05:55.532 "name": "Nvme0" 00:05:55.532 }, 00:05:55.532 "method": "bdev_nvme_attach_controller" 00:05:55.532 }, 00:05:55.532 { 00:05:55.532 "method": "bdev_wait_for_examine" 00:05:55.532 } 00:05:55.532 ] 00:05:55.532 } 00:05:55.532 ] 00:05:55.532 } 00:05:55.532 [2024-07-25 13:01:47.626010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.790 [2024-07-25 13:01:47.734491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.790 [2024-07-25 13:01:47.791157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.048  Copying: 1024/1024 [kB] (average 500 MBps) 00:05:56.048 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:56.048 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.615 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:56.615 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:56.615 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.615 13:01:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.615 [2024-07-25 13:01:48.747144] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:56.615 [2024-07-25 13:01:48.747239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61690 ] 00:05:56.615 { 00:05:56.615 "subsystems": [ 00:05:56.615 { 00:05:56.615 "subsystem": "bdev", 00:05:56.615 "config": [ 00:05:56.615 { 00:05:56.615 "params": { 00:05:56.615 "trtype": "pcie", 00:05:56.615 "traddr": "0000:00:10.0", 00:05:56.615 "name": "Nvme0" 00:05:56.615 }, 00:05:56.615 "method": "bdev_nvme_attach_controller" 00:05:56.615 }, 00:05:56.615 { 00:05:56.615 "method": "bdev_wait_for_examine" 00:05:56.615 } 00:05:56.615 ] 00:05:56.615 } 00:05:56.615 ] 00:05:56.615 } 00:05:56.872 [2024-07-25 13:01:48.886024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.872 [2024-07-25 13:01:48.973507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.872 [2024-07-25 13:01:49.028588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.389  Copying: 56/56 [kB] (average 54 MBps) 00:05:57.389 00:05:57.389 13:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:57.389 13:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:57.389 13:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.389 13:01:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.389 [2024-07-25 13:01:49.433378] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:57.389 [2024-07-25 13:01:49.433518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61704 ] 00:05:57.389 { 00:05:57.389 "subsystems": [ 00:05:57.389 { 00:05:57.389 "subsystem": "bdev", 00:05:57.389 "config": [ 00:05:57.389 { 00:05:57.389 "params": { 00:05:57.389 "trtype": "pcie", 00:05:57.389 "traddr": "0000:00:10.0", 00:05:57.389 "name": "Nvme0" 00:05:57.389 }, 00:05:57.389 "method": "bdev_nvme_attach_controller" 00:05:57.389 }, 00:05:57.389 { 00:05:57.389 "method": "bdev_wait_for_examine" 00:05:57.389 } 00:05:57.389 ] 00:05:57.390 } 00:05:57.390 ] 00:05:57.390 } 00:05:57.390 [2024-07-25 13:01:49.570700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.649 [2024-07-25 13:01:49.666276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.649 [2024-07-25 13:01:49.721517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:57.907  Copying: 56/56 [kB] (average 27 MBps) 00:05:57.907 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.907 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.165 [2024-07-25 13:01:50.108259] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:58.165 [2024-07-25 13:01:50.108352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61719 ] 00:05:58.165 { 00:05:58.165 "subsystems": [ 00:05:58.165 { 00:05:58.165 "subsystem": "bdev", 00:05:58.165 "config": [ 00:05:58.165 { 00:05:58.165 "params": { 00:05:58.165 "trtype": "pcie", 00:05:58.165 "traddr": "0000:00:10.0", 00:05:58.165 "name": "Nvme0" 00:05:58.165 }, 00:05:58.165 "method": "bdev_nvme_attach_controller" 00:05:58.165 }, 00:05:58.165 { 00:05:58.165 "method": "bdev_wait_for_examine" 00:05:58.165 } 00:05:58.165 ] 00:05:58.165 } 00:05:58.165 ] 00:05:58.165 } 00:05:58.165 [2024-07-25 13:01:50.246132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.165 [2024-07-25 13:01:50.345546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.423 [2024-07-25 13:01:50.401199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.681  Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:58.681 00:05:58.681 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:58.681 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:58.681 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:58.681 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:58.681 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:58.681 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:58.681 13:01:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.248 13:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:59.248 13:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:59.248 13:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.248 13:01:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.248 [2024-07-25 13:01:51.434861] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:59.248 [2024-07-25 13:01:51.435184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61743 ] 00:05:59.248 { 00:05:59.248 "subsystems": [ 00:05:59.248 { 00:05:59.248 "subsystem": "bdev", 00:05:59.248 "config": [ 00:05:59.248 { 00:05:59.248 "params": { 00:05:59.248 "trtype": "pcie", 00:05:59.248 "traddr": "0000:00:10.0", 00:05:59.248 "name": "Nvme0" 00:05:59.248 }, 00:05:59.248 "method": "bdev_nvme_attach_controller" 00:05:59.248 }, 00:05:59.248 { 00:05:59.248 "method": "bdev_wait_for_examine" 00:05:59.248 } 00:05:59.248 ] 00:05:59.248 } 00:05:59.248 ] 00:05:59.248 } 00:05:59.507 [2024-07-25 13:01:51.574961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.507 [2024-07-25 13:01:51.683154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.766 [2024-07-25 13:01:51.740440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.024  Copying: 56/56 [kB] (average 54 MBps) 00:06:00.024 00:06:00.024 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:00.024 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:00.024 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.024 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.024 { 00:06:00.024 "subsystems": [ 00:06:00.024 { 00:06:00.024 "subsystem": "bdev", 00:06:00.024 "config": [ 00:06:00.024 { 00:06:00.024 "params": { 00:06:00.024 "trtype": "pcie", 00:06:00.024 "traddr": "0000:00:10.0", 00:06:00.024 "name": "Nvme0" 00:06:00.024 }, 00:06:00.024 "method": "bdev_nvme_attach_controller" 00:06:00.024 }, 00:06:00.024 { 00:06:00.024 "method": "bdev_wait_for_examine" 00:06:00.024 } 00:06:00.024 ] 00:06:00.024 } 00:06:00.024 ] 00:06:00.024 } 00:06:00.024 [2024-07-25 13:01:52.121795] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:00.024 [2024-07-25 13:01:52.121907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61757 ] 00:06:00.283 [2024-07-25 13:01:52.259208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.283 [2024-07-25 13:01:52.359202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.283 [2024-07-25 13:01:52.418176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.801  Copying: 56/56 [kB] (average 54 MBps) 00:06:00.801 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.801 13:01:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.801 [2024-07-25 13:01:52.807071] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:00.801 [2024-07-25 13:01:52.807174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61778 ] 00:06:00.801 { 00:06:00.801 "subsystems": [ 00:06:00.801 { 00:06:00.801 "subsystem": "bdev", 00:06:00.801 "config": [ 00:06:00.801 { 00:06:00.801 "params": { 00:06:00.801 "trtype": "pcie", 00:06:00.801 "traddr": "0000:00:10.0", 00:06:00.801 "name": "Nvme0" 00:06:00.801 }, 00:06:00.801 "method": "bdev_nvme_attach_controller" 00:06:00.801 }, 00:06:00.801 { 00:06:00.801 "method": "bdev_wait_for_examine" 00:06:00.801 } 00:06:00.801 ] 00:06:00.801 } 00:06:00.801 ] 00:06:00.801 } 00:06:00.801 [2024-07-25 13:01:52.945857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.060 [2024-07-25 13:01:53.040294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.060 [2024-07-25 13:01:53.097953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.335  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:01.335 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:01.335 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.904 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:01.904 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:01.904 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.904 13:01:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.904 { 00:06:01.904 "subsystems": [ 00:06:01.904 { 00:06:01.904 "subsystem": "bdev", 00:06:01.904 "config": [ 00:06:01.904 { 00:06:01.904 "params": { 00:06:01.905 "trtype": "pcie", 00:06:01.905 "traddr": "0000:00:10.0", 00:06:01.905 "name": "Nvme0" 00:06:01.905 }, 00:06:01.905 "method": "bdev_nvme_attach_controller" 00:06:01.905 }, 00:06:01.905 { 00:06:01.905 "method": "bdev_wait_for_examine" 00:06:01.905 } 00:06:01.905 ] 00:06:01.905 } 00:06:01.905 ] 00:06:01.905 } 00:06:01.905 [2024-07-25 13:01:53.988296] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:01.905 [2024-07-25 13:01:53.988928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61797 ] 00:06:02.163 [2024-07-25 13:01:54.133964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.163 [2024-07-25 13:01:54.235291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.163 [2024-07-25 13:01:54.290753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:02.421  Copying: 48/48 [kB] (average 46 MBps) 00:06:02.421 00:06:02.679 13:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:02.679 13:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:02.679 13:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.679 13:01:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.679 [2024-07-25 13:01:54.671376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:02.679 [2024-07-25 13:01:54.671478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61816 ] 00:06:02.679 { 00:06:02.679 "subsystems": [ 00:06:02.679 { 00:06:02.679 "subsystem": "bdev", 00:06:02.679 "config": [ 00:06:02.679 { 00:06:02.679 "params": { 00:06:02.679 "trtype": "pcie", 00:06:02.679 "traddr": "0000:00:10.0", 00:06:02.679 "name": "Nvme0" 00:06:02.679 }, 00:06:02.679 "method": "bdev_nvme_attach_controller" 00:06:02.679 }, 00:06:02.679 { 00:06:02.679 "method": "bdev_wait_for_examine" 00:06:02.679 } 00:06:02.679 ] 00:06:02.679 } 00:06:02.679 ] 00:06:02.679 } 00:06:02.679 [2024-07-25 13:01:54.809960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.937 [2024-07-25 13:01:54.889824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.937 [2024-07-25 13:01:54.942754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.196  Copying: 48/48 [kB] (average 46 MBps) 00:06:03.196 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.196 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.196 [2024-07-25 13:01:55.329123] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:03.196 [2024-07-25 13:01:55.329999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61826 ] 00:06:03.196 { 00:06:03.196 "subsystems": [ 00:06:03.196 { 00:06:03.196 "subsystem": "bdev", 00:06:03.196 "config": [ 00:06:03.196 { 00:06:03.196 "params": { 00:06:03.196 "trtype": "pcie", 00:06:03.196 "traddr": "0000:00:10.0", 00:06:03.196 "name": "Nvme0" 00:06:03.196 }, 00:06:03.196 "method": "bdev_nvme_attach_controller" 00:06:03.196 }, 00:06:03.196 { 00:06:03.196 "method": "bdev_wait_for_examine" 00:06:03.196 } 00:06:03.196 ] 00:06:03.196 } 00:06:03.196 ] 00:06:03.196 } 00:06:03.454 [2024-07-25 13:01:55.469260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.454 [2024-07-25 13:01:55.581865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.454 [2024-07-25 13:01:55.634510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.970  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:03.970 00:06:03.970 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:03.970 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:03.970 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:03.970 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:03.970 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:03.970 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:03.970 13:01:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.536 13:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:04.536 13:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:04.536 13:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.536 13:01:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.536 [2024-07-25 13:01:56.488243] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:04.536 [2024-07-25 13:01:56.488331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61849 ] 00:06:04.536 { 00:06:04.536 "subsystems": [ 00:06:04.536 { 00:06:04.536 "subsystem": "bdev", 00:06:04.536 "config": [ 00:06:04.536 { 00:06:04.536 "params": { 00:06:04.536 "trtype": "pcie", 00:06:04.536 "traddr": "0000:00:10.0", 00:06:04.536 "name": "Nvme0" 00:06:04.536 }, 00:06:04.536 "method": "bdev_nvme_attach_controller" 00:06:04.536 }, 00:06:04.536 { 00:06:04.536 "method": "bdev_wait_for_examine" 00:06:04.536 } 00:06:04.536 ] 00:06:04.536 } 00:06:04.536 ] 00:06:04.536 } 00:06:04.536 [2024-07-25 13:01:56.622259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.536 [2024-07-25 13:01:56.723241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.794 [2024-07-25 13:01:56.777359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.053  Copying: 48/48 [kB] (average 46 MBps) 00:06:05.053 00:06:05.053 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:05.053 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:05.053 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.053 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.053 [2024-07-25 13:01:57.151280] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:05.053 [2024-07-25 13:01:57.151376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61864 ] 00:06:05.053 { 00:06:05.053 "subsystems": [ 00:06:05.053 { 00:06:05.053 "subsystem": "bdev", 00:06:05.053 "config": [ 00:06:05.053 { 00:06:05.053 "params": { 00:06:05.053 "trtype": "pcie", 00:06:05.053 "traddr": "0000:00:10.0", 00:06:05.053 "name": "Nvme0" 00:06:05.053 }, 00:06:05.053 "method": "bdev_nvme_attach_controller" 00:06:05.053 }, 00:06:05.053 { 00:06:05.053 "method": "bdev_wait_for_examine" 00:06:05.053 } 00:06:05.053 ] 00:06:05.053 } 00:06:05.053 ] 00:06:05.053 } 00:06:05.311 [2024-07-25 13:01:57.288792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.311 [2024-07-25 13:01:57.369397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.311 [2024-07-25 13:01:57.423402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.570  Copying: 48/48 [kB] (average 46 MBps) 00:06:05.570 00:06:05.570 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.829 13:01:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.829 [2024-07-25 13:01:57.814258] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:05.829 [2024-07-25 13:01:57.814348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61885 ] 00:06:05.829 { 00:06:05.829 "subsystems": [ 00:06:05.829 { 00:06:05.829 "subsystem": "bdev", 00:06:05.829 "config": [ 00:06:05.829 { 00:06:05.829 "params": { 00:06:05.829 "trtype": "pcie", 00:06:05.829 "traddr": "0000:00:10.0", 00:06:05.829 "name": "Nvme0" 00:06:05.829 }, 00:06:05.829 "method": "bdev_nvme_attach_controller" 00:06:05.829 }, 00:06:05.829 { 00:06:05.829 "method": "bdev_wait_for_examine" 00:06:05.829 } 00:06:05.829 ] 00:06:05.829 } 00:06:05.829 ] 00:06:05.829 } 00:06:05.829 [2024-07-25 13:01:57.947739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.087 [2024-07-25 13:01:58.033278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.087 [2024-07-25 13:01:58.086583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.346  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:06.346 00:06:06.346 ************************************ 00:06:06.346 END TEST dd_rw 00:06:06.346 ************************************ 00:06:06.346 00:06:06.346 real 0m15.634s 00:06:06.346 user 0m11.550s 00:06:06.346 sys 0m5.616s 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.346 ************************************ 00:06:06.346 START TEST dd_rw_offset 00:06:06.346 ************************************ 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=8xiz03fjkmyqku3lwxjfj5sajm6lsiq9bwdunyd952ybb9ws66dhyqpjp5xb6e4vvnuh59je5ff9lxd05kley675v0ncmejwwvdg0k2z6whbjal5jl2a7d98isz8m56lk83frhw0vfuspujnl09qx8w7fankxua0z8yei2ju2vhptptk3l0zqsut3hpmfjdvfcus2n7ljxi83nnvouruu057jxx877qq86r0d1ook0wtu7ghdw94zgcchx1ihhkpjzx0uazp3od1ydi5ar0v9e7uvjdmwmb3mytazmn1uc8luc22xj1b2l8lgjlq65yzurctbyg7uvvxywyyct8r9qfa6vi8fvycj0oz60g6nunuq8euoepw53x9zozftgzphs447apx6p9sylddx7fslvnavmg2f32a3d7455y48th17oqu2gew8etag8yog7pula1jp9hcq5p8nvcivaacxf5ii06t8wgw4ocn0i6bnhzull9q2eees09uxfov97lwuohrorcxnyod9xv0q4gidlza4j79ye81uop41m3a5tvxxkkkr17qe9or5bkor5i9f0uanx4i2ts7t8srr7f0g9ksxveh79lyjlc10nx6bgmmgvs0fm2ucprywmeyamzeqgbzdgidjvdjaklo7j3r5ibgj1o6d30wjy1k63pxzlhyk0y8dd69tdaei71g67h6v9u27h0bku0iiit748dh6bcrtw1vp4afqmhuykbvpu5teebm6d9fmqkr0lal6sy3z70t1u0qienuzklx60zmgpkeys8kj7iwbh214v8a6dggay8ntdo8lp8qljryjwsptmb75p84uot7000uoj2ofe0kkgq6ji9ef5jx684tceadjyxqnyq941iusmnwvcjy3h16lfjfyykuf16fvocwn6qf11i81kn58z67p471dueftf4q263qykmbm1f6gpbz2t8uvork5ultx7uwip02o42v1t8m7tk23uazdtxxnmkpu09g0zslb8nftuh4fygnes57dt9crqdyg5pr76hgflw9dx6nekjubvd5jz95xk1fwe9xa0sth11hy2m0i6k2dsu4rwceaac5pl3a2s7l6f4si4r7utrmv912nybqnjdnzgalyvgpxrtzu1etkhzsau0n6r67crmshdj5qi2b2ucl5k4bcgpv6w74r2uyxw1cm96ro2hpblvfdiuzjq6xjjm77jsq2ex9wphxhity6tr2y0u4dqp3fq2tc7f55lai6vvjdz1u2wm7cpfhlrqkbpbtwjhswaaromzhuta8bqt1mu80nnpux4aildpqpc1rqhssh7oj3c78qqy2cm07he908d9rqibrmlb4n73l0pi78lsi4ltziz0zb6iaag71x465j6bx78fsls5gueyvee75abjskmyer83ycn10jnq923v9w526iak7n6iu4ighgvcu7zm2q8hrn6om5vygz0c285kb7m0h2nexckczkzl6j8lmiyz4d3ztfzt5gta9rtnepjmxsvo3p6iqes862uzelo3wbq8xbyz4hzt0ggyl7peqimopljpbco2tp2poa05vvge1a0k0faxjsyvdk7j995zitzfcf6cfgiwb8bzkxren6ci2vy6mjx55bvx5dnj2g0o8mg8c2zyja4i534pc810bpm2dos2k02tvh5nv78llucq4mgbqr5bvjsto8hhg1p4ejt75t7fhgh7uqpvw9rfwnc1n3fn1sl00jn3tgl11tur0nmzuouazk1mwrrb5hcq6shn9m7v9zbd6p5jfc56lmqtp50u13is1kfeehkp0ztdqez2jlthxa5g7xurd16cqkqeuesuixgv14p0d733l47fgruvyzei3ibxch2z05ili9yytornc282cwp5jbh52k8a425qlmv9ok8nqamyjykgw8adh77st9pj9rq2ha0cpv3zsrcgbicffp4eh19ajxx9gy92a2yxfq4ck508ogpdhdn6ui62p4jqf8wqvvqtkmf7fldyc4pujg0z6688idcsuy7dg2brn2bnhx7xj3nne5921gug6i22wt56ulw03i8u88r9w2ml7og6up8fbfczw7pjb8z8a9yq24nbrg3tuhlhl3ixtlpgcqqxpu0coieqvwimgwmvl4cf96221xp9ghs3a16xt0csw6uo319tyacyvbsdpkinn6fhno4uei0t59l3ufqrsuaaewz8spz8hx0cao8uebb46xix62k8qfujxqhwhh4clp4g072ci6rhb9z0yyq13u9p3rbj1mkft29fp8o8mf5e0llo95x9utubv8ikx8e8rps6dolb45dxo1awt8ymad48lk7ho1x5edhta9a3a7smxivwcnbwri9985enao64fc261ep6sd7si4vr4n5k4jilyw7c96waz22yan461y9rgjmcqguojif6nykcm6rci4s7zbt4a1ufbbe6ka16nv5j6k6jg8e3p0refh6w70jpaoeaxyum2on6so2lhg605aq44flbfbzpanucmcufs2re1sdtzddexa9qxa1njher29ykpofohjx26gcpz5chpds09mn7t050cpnkid45vbk5rc7cd8ysl0ctu393unr5k3arm4hr7zqxqvd159mieglq15j6ojwxi2f8s46tg7opv4lcst89y4jz2n7htf28iifh4ux6aq3jwmhtccx9k1fgvwxupa45v0dwrrtneldugjo3d546b8ckd2gky1amedhxjr5ym9uqqbk9q0l89oo1kiay9sb5m4w3nmqt978n1qoktd7wgc6dn8nlf4n4kljgbhgfcnff1apf4xkrhuiyv8biub567ex5h51y28wk23rip52i5v2ojoq45rpuq313qthfomm0i0zxrnvcc4o8gsaiuuqbzh41erkgy1oiwfwxqx2f6mum2vi2sq454lxvjlf92d64lrki8c04ncyajos34yd9a8uypgs4m961m17a1ldp554yhl4y94wq40uwpkzboklm2uejv1qo71hp0bvobv4yxypnp7i7pr4imnltj0naz6jzwx6h9p61qxorfeh9e31bw7wzl2wuum8idfcuuap8huwtswl46yww6o2fbrem0gxa83sae1e765end993nwz3stggydax7wgil89law6vmyzzafnd6bp5ye18tik0hs89rkrio43tj7er567y6om9nohofs5zvfzu6ea6ytup0fppx8nie75awm3k1dj1s27q0t9en3rdynsw6q54gocgs7jkz1xcbpnwxlzsnz326z7f4hiix8dgqw9xb5xumrfj1o0ny1azcria6i8h5oircxl73aq2u0jgrwccw8oyvwxzr1qhco8c6td1bnoqvrujxobz8qaf6tbmx4kyarxq8nr15zlm93ggkedhs1mw2a0vyv4dzvpp30tremh5fiw8hov17ekwz5vhdm7i54ka82lwrgktpmzhtfcl8yjk6awhbdo8ukqx7jygozz5w934jipwbshg8j29rzpbu7kj1xm8sukie6wy5lu
ugu1pie17e2yd56fad3qy7jz3qaxt9w9kg8f7sytd7isc09j19q0qtl6z25fqh2c80bemxv56mph1583tzei4h3ac4gbns4c3a28s6z490tgaiwykmf7vudjv3ru7wbqo65pwdqltglexpp7hoz0808gmaztjy4414q6r5a5irne9yqy004xb8ck1ahvmry8abciylx8j0tscx6gkhbtg3ael9kkugtzgblvlkq6iaqbuomc6gfs70f1pq3tu81x27yhkkkrpz5h3y0n2hajpdi5lqnqohjhf0yybqsu7taw6l74wpxv7hmvs6fey55hs7wx79694pze5x9zajgw7ycgfhvpdpzigkipocr83xf6txd7ftn6mmrnkv2sfuwj1xxid80su744dqhtlfkg8cxzlqapdr22md5dogpslrk548c3sk3qo3eya7856y7j4v1du86hjcpujqp00jvaqt7d75xc6l69st6s4f9nue12na9wmla0phxeoj593tghxoqfyqr7c815w4552la577pw79vwh260ed 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:06.346 13:01:58 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:06.605 [2024-07-25 13:01:58.568468] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:06.605 [2024-07-25 13:01:58.568581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61912 ] 00:06:06.605 { 00:06:06.605 "subsystems": [ 00:06:06.605 { 00:06:06.605 "subsystem": "bdev", 00:06:06.605 "config": [ 00:06:06.605 { 00:06:06.605 "params": { 00:06:06.605 "trtype": "pcie", 00:06:06.605 "traddr": "0000:00:10.0", 00:06:06.605 "name": "Nvme0" 00:06:06.605 }, 00:06:06.605 "method": "bdev_nvme_attach_controller" 00:06:06.605 }, 00:06:06.605 { 00:06:06.605 "method": "bdev_wait_for_examine" 00:06:06.605 } 00:06:06.605 ] 00:06:06.605 } 00:06:06.605 ] 00:06:06.605 } 00:06:06.605 [2024-07-25 13:01:58.707835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.863 [2024-07-25 13:01:58.804781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.863 [2024-07-25 13:01:58.858374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.122  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:07.122 00:06:07.122 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:07.122 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:07.122 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:07.122 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:07.122 [2024-07-25 13:01:59.247783] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:07.122 [2024-07-25 13:01:59.247904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61929 ] 00:06:07.122 { 00:06:07.122 "subsystems": [ 00:06:07.122 { 00:06:07.122 "subsystem": "bdev", 00:06:07.122 "config": [ 00:06:07.122 { 00:06:07.122 "params": { 00:06:07.122 "trtype": "pcie", 00:06:07.122 "traddr": "0000:00:10.0", 00:06:07.122 "name": "Nvme0" 00:06:07.122 }, 00:06:07.122 "method": "bdev_nvme_attach_controller" 00:06:07.122 }, 00:06:07.122 { 00:06:07.122 "method": "bdev_wait_for_examine" 00:06:07.122 } 00:06:07.122 ] 00:06:07.122 } 00:06:07.122 ] 00:06:07.122 } 00:06:07.393 [2024-07-25 13:01:59.385890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.393 [2024-07-25 13:01:59.470692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.393 [2024-07-25 13:01:59.523596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.676  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:07.676 00:06:07.676 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:07.677 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 8xiz03fjkmyqku3lwxjfj5sajm6lsiq9bwdunyd952ybb9ws66dhyqpjp5xb6e4vvnuh59je5ff9lxd05kley675v0ncmejwwvdg0k2z6whbjal5jl2a7d98isz8m56lk83frhw0vfuspujnl09qx8w7fankxua0z8yei2ju2vhptptk3l0zqsut3hpmfjdvfcus2n7ljxi83nnvouruu057jxx877qq86r0d1ook0wtu7ghdw94zgcchx1ihhkpjzx0uazp3od1ydi5ar0v9e7uvjdmwmb3mytazmn1uc8luc22xj1b2l8lgjlq65yzurctbyg7uvvxywyyct8r9qfa6vi8fvycj0oz60g6nunuq8euoepw53x9zozftgzphs447apx6p9sylddx7fslvnavmg2f32a3d7455y48th17oqu2gew8etag8yog7pula1jp9hcq5p8nvcivaacxf5ii06t8wgw4ocn0i6bnhzull9q2eees09uxfov97lwuohrorcxnyod9xv0q4gidlza4j79ye81uop41m3a5tvxxkkkr17qe9or5bkor5i9f0uanx4i2ts7t8srr7f0g9ksxveh79lyjlc10nx6bgmmgvs0fm2ucprywmeyamzeqgbzdgidjvdjaklo7j3r5ibgj1o6d30wjy1k63pxzlhyk0y8dd69tdaei71g67h6v9u27h0bku0iiit748dh6bcrtw1vp4afqmhuykbvpu5teebm6d9fmqkr0lal6sy3z70t1u0qienuzklx60zmgpkeys8kj7iwbh214v8a6dggay8ntdo8lp8qljryjwsptmb75p84uot7000uoj2ofe0kkgq6ji9ef5jx684tceadjyxqnyq941iusmnwvcjy3h16lfjfyykuf16fvocwn6qf11i81kn58z67p471dueftf4q263qykmbm1f6gpbz2t8uvork5ultx7uwip02o42v1t8m7tk23uazdtxxnmkpu09g0zslb8nftuh4fygnes57dt9crqdyg5pr76hgflw9dx6nekjubvd5jz95xk1fwe9xa0sth11hy2m0i6k2dsu4rwceaac5pl3a2s7l6f4si4r7utrmv912nybqnjdnzgalyvgpxrtzu1etkhzsau0n6r67crmshdj5qi2b2ucl5k4bcgpv6w74r2uyxw1cm96ro2hpblvfdiuzjq6xjjm77jsq2ex9wphxhity6tr2y0u4dqp3fq2tc7f55lai6vvjdz1u2wm7cpfhlrqkbpbtwjhswaaromzhuta8bqt1mu80nnpux4aildpqpc1rqhssh7oj3c78qqy2cm07he908d9rqibrmlb4n73l0pi78lsi4ltziz0zb6iaag71x465j6bx78fsls5gueyvee75abjskmyer83ycn10jnq923v9w526iak7n6iu4ighgvcu7zm2q8hrn6om5vygz0c285kb7m0h2nexckczkzl6j8lmiyz4d3ztfzt5gta9rtnepjmxsvo3p6iqes862uzelo3wbq8xbyz4hzt0ggyl7peqimopljpbco2tp2poa05vvge1a0k0faxjsyvdk7j995zitzfcf6cfgiwb8bzkxren6ci2vy6mjx55bvx5dnj2g0o8mg8c2zyja4i534pc810bpm2dos2k02tvh5nv78llucq4mgbqr5bvjsto8hhg1p4ejt75t7fhgh7uqpvw9rfwnc1n3fn1sl00jn3tgl11tur0nmzuouazk1mwrrb5hcq6shn9m7v9zbd6p5jfc56lmqtp50u13is1kfeehkp0ztdqez2jlthxa5g7xurd16cqkqeuesuixgv14p0d733l47fgruvyzei3ibxch2z05ili9yytornc282cwp5jbh52k8a425qlmv9ok8nqamyjykgw8adh77st9pj9rq2ha0cpv3zsrcgbicffp4eh19ajxx9gy92a2yxfq4ck508ogpdhdn6ui62p4jqf8wqvvqtkmf7fldyc4pujg0z6688idcsuy7dg2brn2bnhx7xj3nne5921gug6i22wt56ulw03i8u88r9w2ml7og6up8fbfczw7pjb8z8a9yq24nbrg3tuhlhl3ixtlpgcqqxpu0coieqvwimgwmvl4
cf96221xp9ghs3a16xt0csw6uo319tyacyvbsdpkinn6fhno4uei0t59l3ufqrsuaaewz8spz8hx0cao8uebb46xix62k8qfujxqhwhh4clp4g072ci6rhb9z0yyq13u9p3rbj1mkft29fp8o8mf5e0llo95x9utubv8ikx8e8rps6dolb45dxo1awt8ymad48lk7ho1x5edhta9a3a7smxivwcnbwri9985enao64fc261ep6sd7si4vr4n5k4jilyw7c96waz22yan461y9rgjmcqguojif6nykcm6rci4s7zbt4a1ufbbe6ka16nv5j6k6jg8e3p0refh6w70jpaoeaxyum2on6so2lhg605aq44flbfbzpanucmcufs2re1sdtzddexa9qxa1njher29ykpofohjx26gcpz5chpds09mn7t050cpnkid45vbk5rc7cd8ysl0ctu393unr5k3arm4hr7zqxqvd159mieglq15j6ojwxi2f8s46tg7opv4lcst89y4jz2n7htf28iifh4ux6aq3jwmhtccx9k1fgvwxupa45v0dwrrtneldugjo3d546b8ckd2gky1amedhxjr5ym9uqqbk9q0l89oo1kiay9sb5m4w3nmqt978n1qoktd7wgc6dn8nlf4n4kljgbhgfcnff1apf4xkrhuiyv8biub567ex5h51y28wk23rip52i5v2ojoq45rpuq313qthfomm0i0zxrnvcc4o8gsaiuuqbzh41erkgy1oiwfwxqx2f6mum2vi2sq454lxvjlf92d64lrki8c04ncyajos34yd9a8uypgs4m961m17a1ldp554yhl4y94wq40uwpkzboklm2uejv1qo71hp0bvobv4yxypnp7i7pr4imnltj0naz6jzwx6h9p61qxorfeh9e31bw7wzl2wuum8idfcuuap8huwtswl46yww6o2fbrem0gxa83sae1e765end993nwz3stggydax7wgil89law6vmyzzafnd6bp5ye18tik0hs89rkrio43tj7er567y6om9nohofs5zvfzu6ea6ytup0fppx8nie75awm3k1dj1s27q0t9en3rdynsw6q54gocgs7jkz1xcbpnwxlzsnz326z7f4hiix8dgqw9xb5xumrfj1o0ny1azcria6i8h5oircxl73aq2u0jgrwccw8oyvwxzr1qhco8c6td1bnoqvrujxobz8qaf6tbmx4kyarxq8nr15zlm93ggkedhs1mw2a0vyv4dzvpp30tremh5fiw8hov17ekwz5vhdm7i54ka82lwrgktpmzhtfcl8yjk6awhbdo8ukqx7jygozz5w934jipwbshg8j29rzpbu7kj1xm8sukie6wy5luugu1pie17e2yd56fad3qy7jz3qaxt9w9kg8f7sytd7isc09j19q0qtl6z25fqh2c80bemxv56mph1583tzei4h3ac4gbns4c3a28s6z490tgaiwykmf7vudjv3ru7wbqo65pwdqltglexpp7hoz0808gmaztjy4414q6r5a5irne9yqy004xb8ck1ahvmry8abciylx8j0tscx6gkhbtg3ael9kkugtzgblvlkq6iaqbuomc6gfs70f1pq3tu81x27yhkkkrpz5h3y0n2hajpdi5lqnqohjhf0yybqsu7taw6l74wpxv7hmvs6fey55hs7wx79694pze5x9zajgw7ycgfhvpdpzigkipocr83xf6txd7ftn6mmrnkv2sfuwj1xxid80su744dqhtlfkg8cxzlqapdr22md5dogpslrk548c3sk3qo3eya7856y7j4v1du86hjcpujqp00jvaqt7d75xc6l69st6s4f9nue12na9wmla0phxeoj593tghxoqfyqr7c815w4552la577pw79vwh260ed == 
\8\x\i\z\0\3\f\j\k\m\y\q\k\u\3\l\w\x\j\f\j\5\s\a\j\m\6\l\s\i\q\9\b\w\d\u\n\y\d\9\5\2\y\b\b\9\w\s\6\6\d\h\y\q\p\j\p\5\x\b\6\e\4\v\v\n\u\h\5\9\j\e\5\f\f\9\l\x\d\0\5\k\l\e\y\6\7\5\v\0\n\c\m\e\j\w\w\v\d\g\0\k\2\z\6\w\h\b\j\a\l\5\j\l\2\a\7\d\9\8\i\s\z\8\m\5\6\l\k\8\3\f\r\h\w\0\v\f\u\s\p\u\j\n\l\0\9\q\x\8\w\7\f\a\n\k\x\u\a\0\z\8\y\e\i\2\j\u\2\v\h\p\t\p\t\k\3\l\0\z\q\s\u\t\3\h\p\m\f\j\d\v\f\c\u\s\2\n\7\l\j\x\i\8\3\n\n\v\o\u\r\u\u\0\5\7\j\x\x\8\7\7\q\q\8\6\r\0\d\1\o\o\k\0\w\t\u\7\g\h\d\w\9\4\z\g\c\c\h\x\1\i\h\h\k\p\j\z\x\0\u\a\z\p\3\o\d\1\y\d\i\5\a\r\0\v\9\e\7\u\v\j\d\m\w\m\b\3\m\y\t\a\z\m\n\1\u\c\8\l\u\c\2\2\x\j\1\b\2\l\8\l\g\j\l\q\6\5\y\z\u\r\c\t\b\y\g\7\u\v\v\x\y\w\y\y\c\t\8\r\9\q\f\a\6\v\i\8\f\v\y\c\j\0\o\z\6\0\g\6\n\u\n\u\q\8\e\u\o\e\p\w\5\3\x\9\z\o\z\f\t\g\z\p\h\s\4\4\7\a\p\x\6\p\9\s\y\l\d\d\x\7\f\s\l\v\n\a\v\m\g\2\f\3\2\a\3\d\7\4\5\5\y\4\8\t\h\1\7\o\q\u\2\g\e\w\8\e\t\a\g\8\y\o\g\7\p\u\l\a\1\j\p\9\h\c\q\5\p\8\n\v\c\i\v\a\a\c\x\f\5\i\i\0\6\t\8\w\g\w\4\o\c\n\0\i\6\b\n\h\z\u\l\l\9\q\2\e\e\e\s\0\9\u\x\f\o\v\9\7\l\w\u\o\h\r\o\r\c\x\n\y\o\d\9\x\v\0\q\4\g\i\d\l\z\a\4\j\7\9\y\e\8\1\u\o\p\4\1\m\3\a\5\t\v\x\x\k\k\k\r\1\7\q\e\9\o\r\5\b\k\o\r\5\i\9\f\0\u\a\n\x\4\i\2\t\s\7\t\8\s\r\r\7\f\0\g\9\k\s\x\v\e\h\7\9\l\y\j\l\c\1\0\n\x\6\b\g\m\m\g\v\s\0\f\m\2\u\c\p\r\y\w\m\e\y\a\m\z\e\q\g\b\z\d\g\i\d\j\v\d\j\a\k\l\o\7\j\3\r\5\i\b\g\j\1\o\6\d\3\0\w\j\y\1\k\6\3\p\x\z\l\h\y\k\0\y\8\d\d\6\9\t\d\a\e\i\7\1\g\6\7\h\6\v\9\u\2\7\h\0\b\k\u\0\i\i\i\t\7\4\8\d\h\6\b\c\r\t\w\1\v\p\4\a\f\q\m\h\u\y\k\b\v\p\u\5\t\e\e\b\m\6\d\9\f\m\q\k\r\0\l\a\l\6\s\y\3\z\7\0\t\1\u\0\q\i\e\n\u\z\k\l\x\6\0\z\m\g\p\k\e\y\s\8\k\j\7\i\w\b\h\2\1\4\v\8\a\6\d\g\g\a\y\8\n\t\d\o\8\l\p\8\q\l\j\r\y\j\w\s\p\t\m\b\7\5\p\8\4\u\o\t\7\0\0\0\u\o\j\2\o\f\e\0\k\k\g\q\6\j\i\9\e\f\5\j\x\6\8\4\t\c\e\a\d\j\y\x\q\n\y\q\9\4\1\i\u\s\m\n\w\v\c\j\y\3\h\1\6\l\f\j\f\y\y\k\u\f\1\6\f\v\o\c\w\n\6\q\f\1\1\i\8\1\k\n\5\8\z\6\7\p\4\7\1\d\u\e\f\t\f\4\q\2\6\3\q\y\k\m\b\m\1\f\6\g\p\b\z\2\t\8\u\v\o\r\k\5\u\l\t\x\7\u\w\i\p\0\2\o\4\2\v\1\t\8\m\7\t\k\2\3\u\a\z\d\t\x\x\n\m\k\p\u\0\9\g\0\z\s\l\b\8\n\f\t\u\h\4\f\y\g\n\e\s\5\7\d\t\9\c\r\q\d\y\g\5\p\r\7\6\h\g\f\l\w\9\d\x\6\n\e\k\j\u\b\v\d\5\j\z\9\5\x\k\1\f\w\e\9\x\a\0\s\t\h\1\1\h\y\2\m\0\i\6\k\2\d\s\u\4\r\w\c\e\a\a\c\5\p\l\3\a\2\s\7\l\6\f\4\s\i\4\r\7\u\t\r\m\v\9\1\2\n\y\b\q\n\j\d\n\z\g\a\l\y\v\g\p\x\r\t\z\u\1\e\t\k\h\z\s\a\u\0\n\6\r\6\7\c\r\m\s\h\d\j\5\q\i\2\b\2\u\c\l\5\k\4\b\c\g\p\v\6\w\7\4\r\2\u\y\x\w\1\c\m\9\6\r\o\2\h\p\b\l\v\f\d\i\u\z\j\q\6\x\j\j\m\7\7\j\s\q\2\e\x\9\w\p\h\x\h\i\t\y\6\t\r\2\y\0\u\4\d\q\p\3\f\q\2\t\c\7\f\5\5\l\a\i\6\v\v\j\d\z\1\u\2\w\m\7\c\p\f\h\l\r\q\k\b\p\b\t\w\j\h\s\w\a\a\r\o\m\z\h\u\t\a\8\b\q\t\1\m\u\8\0\n\n\p\u\x\4\a\i\l\d\p\q\p\c\1\r\q\h\s\s\h\7\o\j\3\c\7\8\q\q\y\2\c\m\0\7\h\e\9\0\8\d\9\r\q\i\b\r\m\l\b\4\n\7\3\l\0\p\i\7\8\l\s\i\4\l\t\z\i\z\0\z\b\6\i\a\a\g\7\1\x\4\6\5\j\6\b\x\7\8\f\s\l\s\5\g\u\e\y\v\e\e\7\5\a\b\j\s\k\m\y\e\r\8\3\y\c\n\1\0\j\n\q\9\2\3\v\9\w\5\2\6\i\a\k\7\n\6\i\u\4\i\g\h\g\v\c\u\7\z\m\2\q\8\h\r\n\6\o\m\5\v\y\g\z\0\c\2\8\5\k\b\7\m\0\h\2\n\e\x\c\k\c\z\k\z\l\6\j\8\l\m\i\y\z\4\d\3\z\t\f\z\t\5\g\t\a\9\r\t\n\e\p\j\m\x\s\v\o\3\p\6\i\q\e\s\8\6\2\u\z\e\l\o\3\w\b\q\8\x\b\y\z\4\h\z\t\0\g\g\y\l\7\p\e\q\i\m\o\p\l\j\p\b\c\o\2\t\p\2\p\o\a\0\5\v\v\g\e\1\a\0\k\0\f\a\x\j\s\y\v\d\k\7\j\9\9\5\z\i\t\z\f\c\f\6\c\f\g\i\w\b\8\b\z\k\x\r\e\n\6\c\i\2\v\y\6\m\j\x\5\5\b\v\x\5\d\n\j\2\g\0\o\8\m\g\8\c\2\z\y\j\a\4\i\5\3\4\p\c\8\1\0\b\p\m\2\d\o\s\2\k\0\2\t\v\h\5\n\v\7\8\l\l\u\c\q\4\m\g\b\q\r\5\b\v\j\s\t\o\8\h\h\g\1\p\4\e\j\t\7\5\t\7\f\h\g\h\7\u\q\p\v\w\9\r\f\w\n\c\1\n\3\f\n\1\s\l\0\0\j\n\3\t\g\l\1\1\t\u\r\0\n\m\z\u\o\u\a\z\k\1\m\w\r\r\b\5\h\
c\q\6\s\h\n\9\m\7\v\9\z\b\d\6\p\5\j\f\c\5\6\l\m\q\t\p\5\0\u\1\3\i\s\1\k\f\e\e\h\k\p\0\z\t\d\q\e\z\2\j\l\t\h\x\a\5\g\7\x\u\r\d\1\6\c\q\k\q\e\u\e\s\u\i\x\g\v\1\4\p\0\d\7\3\3\l\4\7\f\g\r\u\v\y\z\e\i\3\i\b\x\c\h\2\z\0\5\i\l\i\9\y\y\t\o\r\n\c\2\8\2\c\w\p\5\j\b\h\5\2\k\8\a\4\2\5\q\l\m\v\9\o\k\8\n\q\a\m\y\j\y\k\g\w\8\a\d\h\7\7\s\t\9\p\j\9\r\q\2\h\a\0\c\p\v\3\z\s\r\c\g\b\i\c\f\f\p\4\e\h\1\9\a\j\x\x\9\g\y\9\2\a\2\y\x\f\q\4\c\k\5\0\8\o\g\p\d\h\d\n\6\u\i\6\2\p\4\j\q\f\8\w\q\v\v\q\t\k\m\f\7\f\l\d\y\c\4\p\u\j\g\0\z\6\6\8\8\i\d\c\s\u\y\7\d\g\2\b\r\n\2\b\n\h\x\7\x\j\3\n\n\e\5\9\2\1\g\u\g\6\i\2\2\w\t\5\6\u\l\w\0\3\i\8\u\8\8\r\9\w\2\m\l\7\o\g\6\u\p\8\f\b\f\c\z\w\7\p\j\b\8\z\8\a\9\y\q\2\4\n\b\r\g\3\t\u\h\l\h\l\3\i\x\t\l\p\g\c\q\q\x\p\u\0\c\o\i\e\q\v\w\i\m\g\w\m\v\l\4\c\f\9\6\2\2\1\x\p\9\g\h\s\3\a\1\6\x\t\0\c\s\w\6\u\o\3\1\9\t\y\a\c\y\v\b\s\d\p\k\i\n\n\6\f\h\n\o\4\u\e\i\0\t\5\9\l\3\u\f\q\r\s\u\a\a\e\w\z\8\s\p\z\8\h\x\0\c\a\o\8\u\e\b\b\4\6\x\i\x\6\2\k\8\q\f\u\j\x\q\h\w\h\h\4\c\l\p\4\g\0\7\2\c\i\6\r\h\b\9\z\0\y\y\q\1\3\u\9\p\3\r\b\j\1\m\k\f\t\2\9\f\p\8\o\8\m\f\5\e\0\l\l\o\9\5\x\9\u\t\u\b\v\8\i\k\x\8\e\8\r\p\s\6\d\o\l\b\4\5\d\x\o\1\a\w\t\8\y\m\a\d\4\8\l\k\7\h\o\1\x\5\e\d\h\t\a\9\a\3\a\7\s\m\x\i\v\w\c\n\b\w\r\i\9\9\8\5\e\n\a\o\6\4\f\c\2\6\1\e\p\6\s\d\7\s\i\4\v\r\4\n\5\k\4\j\i\l\y\w\7\c\9\6\w\a\z\2\2\y\a\n\4\6\1\y\9\r\g\j\m\c\q\g\u\o\j\i\f\6\n\y\k\c\m\6\r\c\i\4\s\7\z\b\t\4\a\1\u\f\b\b\e\6\k\a\1\6\n\v\5\j\6\k\6\j\g\8\e\3\p\0\r\e\f\h\6\w\7\0\j\p\a\o\e\a\x\y\u\m\2\o\n\6\s\o\2\l\h\g\6\0\5\a\q\4\4\f\l\b\f\b\z\p\a\n\u\c\m\c\u\f\s\2\r\e\1\s\d\t\z\d\d\e\x\a\9\q\x\a\1\n\j\h\e\r\2\9\y\k\p\o\f\o\h\j\x\2\6\g\c\p\z\5\c\h\p\d\s\0\9\m\n\7\t\0\5\0\c\p\n\k\i\d\4\5\v\b\k\5\r\c\7\c\d\8\y\s\l\0\c\t\u\3\9\3\u\n\r\5\k\3\a\r\m\4\h\r\7\z\q\x\q\v\d\1\5\9\m\i\e\g\l\q\1\5\j\6\o\j\w\x\i\2\f\8\s\4\6\t\g\7\o\p\v\4\l\c\s\t\8\9\y\4\j\z\2\n\7\h\t\f\2\8\i\i\f\h\4\u\x\6\a\q\3\j\w\m\h\t\c\c\x\9\k\1\f\g\v\w\x\u\p\a\4\5\v\0\d\w\r\r\t\n\e\l\d\u\g\j\o\3\d\5\4\6\b\8\c\k\d\2\g\k\y\1\a\m\e\d\h\x\j\r\5\y\m\9\u\q\q\b\k\9\q\0\l\8\9\o\o\1\k\i\a\y\9\s\b\5\m\4\w\3\n\m\q\t\9\7\8\n\1\q\o\k\t\d\7\w\g\c\6\d\n\8\n\l\f\4\n\4\k\l\j\g\b\h\g\f\c\n\f\f\1\a\p\f\4\x\k\r\h\u\i\y\v\8\b\i\u\b\5\6\7\e\x\5\h\5\1\y\2\8\w\k\2\3\r\i\p\5\2\i\5\v\2\o\j\o\q\4\5\r\p\u\q\3\1\3\q\t\h\f\o\m\m\0\i\0\z\x\r\n\v\c\c\4\o\8\g\s\a\i\u\u\q\b\z\h\4\1\e\r\k\g\y\1\o\i\w\f\w\x\q\x\2\f\6\m\u\m\2\v\i\2\s\q\4\5\4\l\x\v\j\l\f\9\2\d\6\4\l\r\k\i\8\c\0\4\n\c\y\a\j\o\s\3\4\y\d\9\a\8\u\y\p\g\s\4\m\9\6\1\m\1\7\a\1\l\d\p\5\5\4\y\h\l\4\y\9\4\w\q\4\0\u\w\p\k\z\b\o\k\l\m\2\u\e\j\v\1\q\o\7\1\h\p\0\b\v\o\b\v\4\y\x\y\p\n\p\7\i\7\p\r\4\i\m\n\l\t\j\0\n\a\z\6\j\z\w\x\6\h\9\p\6\1\q\x\o\r\f\e\h\9\e\3\1\b\w\7\w\z\l\2\w\u\u\m\8\i\d\f\c\u\u\a\p\8\h\u\w\t\s\w\l\4\6\y\w\w\6\o\2\f\b\r\e\m\0\g\x\a\8\3\s\a\e\1\e\7\6\5\e\n\d\9\9\3\n\w\z\3\s\t\g\g\y\d\a\x\7\w\g\i\l\8\9\l\a\w\6\v\m\y\z\z\a\f\n\d\6\b\p\5\y\e\1\8\t\i\k\0\h\s\8\9\r\k\r\i\o\4\3\t\j\7\e\r\5\6\7\y\6\o\m\9\n\o\h\o\f\s\5\z\v\f\z\u\6\e\a\6\y\t\u\p\0\f\p\p\x\8\n\i\e\7\5\a\w\m\3\k\1\d\j\1\s\2\7\q\0\t\9\e\n\3\r\d\y\n\s\w\6\q\5\4\g\o\c\g\s\7\j\k\z\1\x\c\b\p\n\w\x\l\z\s\n\z\3\2\6\z\7\f\4\h\i\i\x\8\d\g\q\w\9\x\b\5\x\u\m\r\f\j\1\o\0\n\y\1\a\z\c\r\i\a\6\i\8\h\5\o\i\r\c\x\l\7\3\a\q\2\u\0\j\g\r\w\c\c\w\8\o\y\v\w\x\z\r\1\q\h\c\o\8\c\6\t\d\1\b\n\o\q\v\r\u\j\x\o\b\z\8\q\a\f\6\t\b\m\x\4\k\y\a\r\x\q\8\n\r\1\5\z\l\m\9\3\g\g\k\e\d\h\s\1\m\w\2\a\0\v\y\v\4\d\z\v\p\p\3\0\t\r\e\m\h\5\f\i\w\8\h\o\v\1\7\e\k\w\z\5\v\h\d\m\7\i\5\4\k\a\8\2\l\w\r\g\k\t\p\m\z\h\t\f\c\l\8\y\j\k\6\a\w\h\b\d\o\8\u\k\q\x\7\j\y\g\o\z\z\5\w\9\3\4\j\i\p\w\b\s\h\g\8\j\2\9\r\z\p\b\u\7\k\j\1\x\m\8\s\u\k\i\e\6\w\y\5\l\u\u\g\u\1\p
\i\e\1\7\e\2\y\d\5\6\f\a\d\3\q\y\7\j\z\3\q\a\x\t\9\w\9\k\g\8\f\7\s\y\t\d\7\i\s\c\0\9\j\1\9\q\0\q\t\l\6\z\2\5\f\q\h\2\c\8\0\b\e\m\x\v\5\6\m\p\h\1\5\8\3\t\z\e\i\4\h\3\a\c\4\g\b\n\s\4\c\3\a\2\8\s\6\z\4\9\0\t\g\a\i\w\y\k\m\f\7\v\u\d\j\v\3\r\u\7\w\b\q\o\6\5\p\w\d\q\l\t\g\l\e\x\p\p\7\h\o\z\0\8\0\8\g\m\a\z\t\j\y\4\4\1\4\q\6\r\5\a\5\i\r\n\e\9\y\q\y\0\0\4\x\b\8\c\k\1\a\h\v\m\r\y\8\a\b\c\i\y\l\x\8\j\0\t\s\c\x\6\g\k\h\b\t\g\3\a\e\l\9\k\k\u\g\t\z\g\b\l\v\l\k\q\6\i\a\q\b\u\o\m\c\6\g\f\s\7\0\f\1\p\q\3\t\u\8\1\x\2\7\y\h\k\k\k\r\p\z\5\h\3\y\0\n\2\h\a\j\p\d\i\5\l\q\n\q\o\h\j\h\f\0\y\y\b\q\s\u\7\t\a\w\6\l\7\4\w\p\x\v\7\h\m\v\s\6\f\e\y\5\5\h\s\7\w\x\7\9\6\9\4\p\z\e\5\x\9\z\a\j\g\w\7\y\c\g\f\h\v\p\d\p\z\i\g\k\i\p\o\c\r\8\3\x\f\6\t\x\d\7\f\t\n\6\m\m\r\n\k\v\2\s\f\u\w\j\1\x\x\i\d\8\0\s\u\7\4\4\d\q\h\t\l\f\k\g\8\c\x\z\l\q\a\p\d\r\2\2\m\d\5\d\o\g\p\s\l\r\k\5\4\8\c\3\s\k\3\q\o\3\e\y\a\7\8\5\6\y\7\j\4\v\1\d\u\8\6\h\j\c\p\u\j\q\p\0\0\j\v\a\q\t\7\d\7\5\x\c\6\l\6\9\s\t\6\s\4\f\9\n\u\e\1\2\n\a\9\w\m\l\a\0\p\h\x\e\o\j\5\9\3\t\g\h\x\o\q\f\y\q\r\7\c\8\1\5\w\4\5\5\2\l\a\5\7\7\p\w\7\9\v\w\h\2\6\0\e\d ]] 00:06:07.677 00:06:07.677 real 0m1.371s 00:06:07.677 user 0m0.956s 00:06:07.677 sys 0m0.587s 00:06:07.677 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.677 13:01:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:07.677 ************************************ 00:06:07.677 END TEST dd_rw_offset 00:06:07.677 ************************************ 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.935 13:01:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.935 [2024-07-25 13:01:59.934348] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:07.935 { 00:06:07.935 "subsystems": [ 00:06:07.935 { 00:06:07.935 "subsystem": "bdev", 00:06:07.935 "config": [ 00:06:07.935 { 00:06:07.935 "params": { 00:06:07.935 "trtype": "pcie", 00:06:07.935 "traddr": "0000:00:10.0", 00:06:07.935 "name": "Nvme0" 00:06:07.935 }, 00:06:07.935 "method": "bdev_nvme_attach_controller" 00:06:07.935 }, 00:06:07.935 { 00:06:07.935 "method": "bdev_wait_for_examine" 00:06:07.935 } 00:06:07.935 ] 00:06:07.935 } 00:06:07.935 ] 00:06:07.935 } 00:06:07.935 [2024-07-25 13:01:59.934434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61964 ] 00:06:07.935 [2024-07-25 13:02:00.073670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.194 [2024-07-25 13:02:00.168331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.194 [2024-07-25 13:02:00.221518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.452  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:08.452 00:06:08.452 13:02:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.452 00:06:08.452 real 0m18.838s 00:06:08.452 user 0m13.611s 00:06:08.452 sys 0m6.848s 00:06:08.452 13:02:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.452 13:02:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.452 ************************************ 00:06:08.452 END TEST spdk_dd_basic_rw 00:06:08.452 ************************************ 00:06:08.452 13:02:00 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:08.452 13:02:00 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.452 13:02:00 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.452 13:02:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:08.452 ************************************ 00:06:08.452 START TEST spdk_dd_posix 00:06:08.452 ************************************ 00:06:08.452 13:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:08.711 * Looking for test storage... 
00:06:08.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:08.711 * First test run, liburing in use 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.711 ************************************ 00:06:08.711 START TEST dd_flag_append 00:06:08.711 ************************************ 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=3vl4ihzj1hrqexo5q8mspj7z2afxfxcr 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=9msb72rpz7tatuvt6yrczfsxo77nxgww 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 3vl4ihzj1hrqexo5q8mspj7z2afxfxcr 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 9msb72rpz7tatuvt6yrczfsxo77nxgww 00:06:08.711 13:02:00 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:08.711 [2024-07-25 13:02:00.771908] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
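Editor's note: the dd_flag_append setup above generates two 32-byte strings (dump0 and dump1), writes them to dd.dump0 and dd.dump1, and copies dump0 onto dump1 with --oflag=append; the check logged further below only passes if dd.dump1 ends up holding dump1 followed by dump0. A hedged sketch of the same end state, using coreutils dd in place of spdk_dd and relative file names for brevity:

# Reproduce the append semantics with plain dd: oflag=append opens the output
# O_APPEND, conv=notrunc keeps the existing dump1 bytes in place.
printf %s '3vl4ihzj1hrqexo5q8mspj7z2afxfxcr' > dd.dump0   # dump0 from the log
printf %s '9msb72rpz7tatuvt6yrczfsxo77nxgww' > dd.dump1   # dump1 from the log
dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none
[[ $(< dd.dump1) == 9msb72rpz7tatuvt6yrczfsxo77nxgww3vl4ihzj1hrqexo5q8mspj7z2afxfxcr ]] \
    && echo 'append check passed'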
00:06:08.711 [2024-07-25 13:02:00.772518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62028 ] 00:06:08.970 [2024-07-25 13:02:00.913102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.970 [2024-07-25 13:02:01.020497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.970 [2024-07-25 13:02:01.077875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.229  Copying: 32/32 [B] (average 31 kBps) 00:06:09.229 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 9msb72rpz7tatuvt6yrczfsxo77nxgww3vl4ihzj1hrqexo5q8mspj7z2afxfxcr == \9\m\s\b\7\2\r\p\z\7\t\a\t\u\v\t\6\y\r\c\z\f\s\x\o\7\7\n\x\g\w\w\3\v\l\4\i\h\z\j\1\h\r\q\e\x\o\5\q\8\m\s\p\j\7\z\2\a\f\x\f\x\c\r ]] 00:06:09.229 00:06:09.229 real 0m0.603s 00:06:09.229 user 0m0.344s 00:06:09.229 sys 0m0.277s 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:09.229 ************************************ 00:06:09.229 END TEST dd_flag_append 00:06:09.229 ************************************ 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.229 ************************************ 00:06:09.229 START TEST dd_flag_directory 00:06:09.229 ************************************ 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.229 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:09.229 [2024-07-25 13:02:01.418160] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:09.229 [2024-07-25 13:02:01.418266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62051 ] 00:06:09.488 [2024-07-25 13:02:01.553892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.488 [2024-07-25 13:02:01.650676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.747 [2024-07-25 13:02:01.711370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.747 [2024-07-25 13:02:01.747043] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.747 [2024-07-25 13:02:01.747110] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.747 [2024-07-25 13:02:01.747139] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.747 [2024-07-25 13:02:01.862666] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.005 13:02:01 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:10.005 [2024-07-25 13:02:02.048170] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:10.006 [2024-07-25 13:02:02.048266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62066 ] 00:06:10.006 [2024-07-25 13:02:02.186342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.264 [2024-07-25 13:02:02.285217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.264 [2024-07-25 13:02:02.339123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.264 [2024-07-25 13:02:02.375409] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:10.264 [2024-07-25 13:02:02.375480] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:10.264 [2024-07-25 13:02:02.375510] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.523 [2024-07-25 13:02:02.491706] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.523 00:06:10.523 real 0m1.233s 00:06:10.523 user 0m0.719s 00:06:10.523 sys 0m0.303s 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.523 ************************************ 00:06:10.523 END TEST dd_flag_directory 00:06:10.523 ************************************ 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:10.523 ************************************ 00:06:10.523 START TEST dd_flag_nofollow 00:06:10.523 ************************************ 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.523 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.524 13:02:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.524 [2024-07-25 13:02:02.710170] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
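Editor's note: looking back at the dd_flag_directory run that finished above, both of its passes point --iflag=directory (and then --oflag=directory) at a regular file and only count as a pass when spdk_dd refuses the copy with "Not a directory"; the NOT wrapper maps that expected failure back to exit status 1. A minimal sketch of the first pass, with the paths taken from the log:

# Expect the directory flag on a regular file to be rejected; success here
# means the copy failed, mirroring the NOT ... --iflag=directory check above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if ! "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 2>/dev/null; then
    echo 'directory flag rejected a regular file, as expected'
fi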
00:06:10.524 [2024-07-25 13:02:02.710275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62092 ] 00:06:10.782 [2024-07-25 13:02:02.841646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.782 [2024-07-25 13:02:02.936865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.041 [2024-07-25 13:02:02.994144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.041 [2024-07-25 13:02:03.031458] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:11.041 [2024-07-25 13:02:03.031548] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:11.041 [2024-07-25 13:02:03.031564] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.041 [2024-07-25 13:02:03.161145] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.299 13:02:03 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:11.299 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:11.299 [2024-07-25 13:02:03.325605] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:11.299 [2024-07-25 13:02:03.326307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62104 ] 00:06:11.299 [2024-07-25 13:02:03.468126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.556 [2024-07-25 13:02:03.586369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.556 [2024-07-25 13:02:03.646335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.556 [2024-07-25 13:02:03.678687] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:11.556 [2024-07-25 13:02:03.678760] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:11.556 [2024-07-25 13:02:03.678792] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.815 [2024-07-25 13:02:03.798817] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:11.815 13:02:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.815 [2024-07-25 13:02:03.969580] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
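Editor's note: the dd_flag_nofollow passes above follow the same negative-test pattern. dd.dump0.link and dd.dump1.link are symlinks to the dump files; the copy must fail with "Too many levels of symbolic links" whenever nofollow is requested through a link, and the final copy through the same link without the flag must succeed. A condensed sketch with the same relative file names, stderr suppressed for brevity:

# Negative case: nofollow through a symlinked input must fail.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
"$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 2>/dev/null \
    || echo 'nofollow refused the symlinked input, as expected'
# Positive case: the same link is readable once nofollow is dropped.
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1 2>/dev/null \
    && echo 'plain copy through the link succeeded'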
00:06:11.815 [2024-07-25 13:02:03.969688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62117 ] 00:06:12.075 [2024-07-25 13:02:04.100441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.075 [2024-07-25 13:02:04.210562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.075 [2024-07-25 13:02:04.265550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.333  Copying: 512/512 [B] (average 500 kBps) 00:06:12.333 00:06:12.334 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ yjnqzwrjiul9zv6vzhgn86tdezo9ofkg2ti3l4u9e5mohsyws669qzvmt3ob6lnutdtuebckdplcnzwfaj1wu7mh9effeqe1pnrpab229kwoahuspmkzftdrhyw58nhknx082c27t0v6097z2gl1zx4pb6zos1m47k0az0vu2dp5skzrabzn0w1mn3a3yrgnu4fuj434kjpe16w0ysucezd2fh0o9b68cqyjryuojv7fi2z8a17lp4za7hoqtwno93l14sa71ea37q7qn7yzjomdlt1qdgpwdupiq7ifuma73k18rwlas4w2mcajzqaqcxjp2dht184et8pyz7o77qdba87piqm1rm3y5u8uj8e820szvlj4nfg1iweacwvve83cando06c2rgiuypu61so8zkfh11yoip1hnugr2wi49i5zv60f7ii6p91lrysgnl69e0xecf5mp01pdv6eesepgrcr4x86kalhlz7iojhug3pylgwu894i7orhr884 == \y\j\n\q\z\w\r\j\i\u\l\9\z\v\6\v\z\h\g\n\8\6\t\d\e\z\o\9\o\f\k\g\2\t\i\3\l\4\u\9\e\5\m\o\h\s\y\w\s\6\6\9\q\z\v\m\t\3\o\b\6\l\n\u\t\d\t\u\e\b\c\k\d\p\l\c\n\z\w\f\a\j\1\w\u\7\m\h\9\e\f\f\e\q\e\1\p\n\r\p\a\b\2\2\9\k\w\o\a\h\u\s\p\m\k\z\f\t\d\r\h\y\w\5\8\n\h\k\n\x\0\8\2\c\2\7\t\0\v\6\0\9\7\z\2\g\l\1\z\x\4\p\b\6\z\o\s\1\m\4\7\k\0\a\z\0\v\u\2\d\p\5\s\k\z\r\a\b\z\n\0\w\1\m\n\3\a\3\y\r\g\n\u\4\f\u\j\4\3\4\k\j\p\e\1\6\w\0\y\s\u\c\e\z\d\2\f\h\0\o\9\b\6\8\c\q\y\j\r\y\u\o\j\v\7\f\i\2\z\8\a\1\7\l\p\4\z\a\7\h\o\q\t\w\n\o\9\3\l\1\4\s\a\7\1\e\a\3\7\q\7\q\n\7\y\z\j\o\m\d\l\t\1\q\d\g\p\w\d\u\p\i\q\7\i\f\u\m\a\7\3\k\1\8\r\w\l\a\s\4\w\2\m\c\a\j\z\q\a\q\c\x\j\p\2\d\h\t\1\8\4\e\t\8\p\y\z\7\o\7\7\q\d\b\a\8\7\p\i\q\m\1\r\m\3\y\5\u\8\u\j\8\e\8\2\0\s\z\v\l\j\4\n\f\g\1\i\w\e\a\c\w\v\v\e\8\3\c\a\n\d\o\0\6\c\2\r\g\i\u\y\p\u\6\1\s\o\8\z\k\f\h\1\1\y\o\i\p\1\h\n\u\g\r\2\w\i\4\9\i\5\z\v\6\0\f\7\i\i\6\p\9\1\l\r\y\s\g\n\l\6\9\e\0\x\e\c\f\5\m\p\0\1\p\d\v\6\e\e\s\e\p\g\r\c\r\4\x\8\6\k\a\l\h\l\z\7\i\o\j\h\u\g\3\p\y\l\g\w\u\8\9\4\i\7\o\r\h\r\8\8\4 ]] 00:06:12.334 00:06:12.334 real 0m1.869s 00:06:12.334 user 0m1.108s 00:06:12.334 sys 0m0.570s 00:06:12.334 ************************************ 00:06:12.334 END TEST dd_flag_nofollow 00:06:12.334 ************************************ 00:06:12.334 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.334 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:12.592 ************************************ 00:06:12.592 START TEST dd_flag_noatime 00:06:12.592 ************************************ 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:12.592 13:02:04 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721912524 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721912524 00:06:12.592 13:02:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:13.527 13:02:05 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.527 [2024-07-25 13:02:05.652927] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:13.527 [2024-07-25 13:02:05.653053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62156 ] 00:06:13.784 [2024-07-25 13:02:05.796339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.784 [2024-07-25 13:02:05.939519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.042 [2024-07-25 13:02:05.998886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.301  Copying: 512/512 [B] (average 500 kBps) 00:06:14.301 00:06:14.301 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.301 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721912524 )) 00:06:14.301 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.301 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721912524 )) 00:06:14.301 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.301 [2024-07-25 13:02:06.323690] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
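Editor's note: the dd_flag_noatime sequence above records the access time of dd.dump0 with stat --printf=%X, sleeps one second, copies the file with --iflag=noatime and expects the atime to be unchanged; the plain copy logged just below is then expected to move it forward. A rough sketch of the first half, assuming a filesystem on which atime updates are actually observable (the harness relies on the second, flag-less copy to confirm that):

# Access time must not move when the source is read with the noatime flag.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
before=$(stat --printf=%X dd.dump0)
sleep 1
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1 2>/dev/null
after=$(stat --printf=%X dd.dump0)
(( before == after )) && echo 'noatime left the access time untouched'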
00:06:14.301 [2024-07-25 13:02:06.323810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62175 ] 00:06:14.301 [2024-07-25 13:02:06.463524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.560 [2024-07-25 13:02:06.572748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.560 [2024-07-25 13:02:06.629958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.819  Copying: 512/512 [B] (average 500 kBps) 00:06:14.819 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721912526 )) 00:06:14.819 00:06:14.819 real 0m2.318s 00:06:14.819 user 0m0.742s 00:06:14.819 sys 0m0.626s 00:06:14.819 ************************************ 00:06:14.819 END TEST dd_flag_noatime 00:06:14.819 ************************************ 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 ************************************ 00:06:14.819 START TEST dd_flags_misc 00:06:14.819 ************************************ 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.819 13:02:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:14.819 [2024-07-25 13:02:07.002242] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:14.819 [2024-07-25 13:02:07.002532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62208 ] 00:06:15.077 [2024-07-25 13:02:07.142405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.077 [2024-07-25 13:02:07.242419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.336 [2024-07-25 13:02:07.301743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.595  Copying: 512/512 [B] (average 500 kBps) 00:06:15.595 00:06:15.595 13:02:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5w1i0idvzxtwum2323vljg8kb6cjk866mtv5480ana577d2pg9xxihan6jdp0r3tpbtw55lc7w49f50vkofck9538rs0ncqb3nbq9ozi1x343tmqe131oxuru9jitf0ooiujfppcjpvex7izrgkh2v488bdgnmn90f7buc1mss2kdejszsju4778ddivr3wyheo20e1f52axwruefyt5izq3d39y77rt8j1kwo5r61ssjp1mnhgcfqsvipl7mex7pq0shh90x5yuzz6gv01pjvlz1wq2pkfy8pe3zbkuuremfrimkudmzpcu6nqrbxkbstwwjmghhapi9m8lym2y0i5dceojrxsh4xje1qycyecuwdeo8gdfcx69gvu7rd6fmekyh4m6s50h13fv6g1z52jn5sl1ptq9s1pewqvd58064mpqemui4y4cxecum2yoouhsjo0d17cgu8ffdespkzupjgsfe12aprvtawxji4wupclxr3g2sc98dfzyydj == \x\5\w\1\i\0\i\d\v\z\x\t\w\u\m\2\3\2\3\v\l\j\g\8\k\b\6\c\j\k\8\6\6\m\t\v\5\4\8\0\a\n\a\5\7\7\d\2\p\g\9\x\x\i\h\a\n\6\j\d\p\0\r\3\t\p\b\t\w\5\5\l\c\7\w\4\9\f\5\0\v\k\o\f\c\k\9\5\3\8\r\s\0\n\c\q\b\3\n\b\q\9\o\z\i\1\x\3\4\3\t\m\q\e\1\3\1\o\x\u\r\u\9\j\i\t\f\0\o\o\i\u\j\f\p\p\c\j\p\v\e\x\7\i\z\r\g\k\h\2\v\4\8\8\b\d\g\n\m\n\9\0\f\7\b\u\c\1\m\s\s\2\k\d\e\j\s\z\s\j\u\4\7\7\8\d\d\i\v\r\3\w\y\h\e\o\2\0\e\1\f\5\2\a\x\w\r\u\e\f\y\t\5\i\z\q\3\d\3\9\y\7\7\r\t\8\j\1\k\w\o\5\r\6\1\s\s\j\p\1\m\n\h\g\c\f\q\s\v\i\p\l\7\m\e\x\7\p\q\0\s\h\h\9\0\x\5\y\u\z\z\6\g\v\0\1\p\j\v\l\z\1\w\q\2\p\k\f\y\8\p\e\3\z\b\k\u\u\r\e\m\f\r\i\m\k\u\d\m\z\p\c\u\6\n\q\r\b\x\k\b\s\t\w\w\j\m\g\h\h\a\p\i\9\m\8\l\y\m\2\y\0\i\5\d\c\e\o\j\r\x\s\h\4\x\j\e\1\q\y\c\y\e\c\u\w\d\e\o\8\g\d\f\c\x\6\9\g\v\u\7\r\d\6\f\m\e\k\y\h\4\m\6\s\5\0\h\1\3\f\v\6\g\1\z\5\2\j\n\5\s\l\1\p\t\q\9\s\1\p\e\w\q\v\d\5\8\0\6\4\m\p\q\e\m\u\i\4\y\4\c\x\e\c\u\m\2\y\o\o\u\h\s\j\o\0\d\1\7\c\g\u\8\f\f\d\e\s\p\k\z\u\p\j\g\s\f\e\1\2\a\p\r\v\t\a\w\x\j\i\4\w\u\p\c\l\x\r\3\g\2\s\c\9\8\d\f\z\y\y\d\j ]] 00:06:15.595 13:02:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.595 13:02:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:15.595 [2024-07-25 13:02:07.610308] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:15.595 [2024-07-25 13:02:07.610416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:06:15.595 [2024-07-25 13:02:07.749831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.853 [2024-07-25 13:02:07.857532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.853 [2024-07-25 13:02:07.913354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.113  Copying: 512/512 [B] (average 500 kBps) 00:06:16.113 00:06:16.113 13:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5w1i0idvzxtwum2323vljg8kb6cjk866mtv5480ana577d2pg9xxihan6jdp0r3tpbtw55lc7w49f50vkofck9538rs0ncqb3nbq9ozi1x343tmqe131oxuru9jitf0ooiujfppcjpvex7izrgkh2v488bdgnmn90f7buc1mss2kdejszsju4778ddivr3wyheo20e1f52axwruefyt5izq3d39y77rt8j1kwo5r61ssjp1mnhgcfqsvipl7mex7pq0shh90x5yuzz6gv01pjvlz1wq2pkfy8pe3zbkuuremfrimkudmzpcu6nqrbxkbstwwjmghhapi9m8lym2y0i5dceojrxsh4xje1qycyecuwdeo8gdfcx69gvu7rd6fmekyh4m6s50h13fv6g1z52jn5sl1ptq9s1pewqvd58064mpqemui4y4cxecum2yoouhsjo0d17cgu8ffdespkzupjgsfe12aprvtawxji4wupclxr3g2sc98dfzyydj == \x\5\w\1\i\0\i\d\v\z\x\t\w\u\m\2\3\2\3\v\l\j\g\8\k\b\6\c\j\k\8\6\6\m\t\v\5\4\8\0\a\n\a\5\7\7\d\2\p\g\9\x\x\i\h\a\n\6\j\d\p\0\r\3\t\p\b\t\w\5\5\l\c\7\w\4\9\f\5\0\v\k\o\f\c\k\9\5\3\8\r\s\0\n\c\q\b\3\n\b\q\9\o\z\i\1\x\3\4\3\t\m\q\e\1\3\1\o\x\u\r\u\9\j\i\t\f\0\o\o\i\u\j\f\p\p\c\j\p\v\e\x\7\i\z\r\g\k\h\2\v\4\8\8\b\d\g\n\m\n\9\0\f\7\b\u\c\1\m\s\s\2\k\d\e\j\s\z\s\j\u\4\7\7\8\d\d\i\v\r\3\w\y\h\e\o\2\0\e\1\f\5\2\a\x\w\r\u\e\f\y\t\5\i\z\q\3\d\3\9\y\7\7\r\t\8\j\1\k\w\o\5\r\6\1\s\s\j\p\1\m\n\h\g\c\f\q\s\v\i\p\l\7\m\e\x\7\p\q\0\s\h\h\9\0\x\5\y\u\z\z\6\g\v\0\1\p\j\v\l\z\1\w\q\2\p\k\f\y\8\p\e\3\z\b\k\u\u\r\e\m\f\r\i\m\k\u\d\m\z\p\c\u\6\n\q\r\b\x\k\b\s\t\w\w\j\m\g\h\h\a\p\i\9\m\8\l\y\m\2\y\0\i\5\d\c\e\o\j\r\x\s\h\4\x\j\e\1\q\y\c\y\e\c\u\w\d\e\o\8\g\d\f\c\x\6\9\g\v\u\7\r\d\6\f\m\e\k\y\h\4\m\6\s\5\0\h\1\3\f\v\6\g\1\z\5\2\j\n\5\s\l\1\p\t\q\9\s\1\p\e\w\q\v\d\5\8\0\6\4\m\p\q\e\m\u\i\4\y\4\c\x\e\c\u\m\2\y\o\o\u\h\s\j\o\0\d\1\7\c\g\u\8\f\f\d\e\s\p\k\z\u\p\j\g\s\f\e\1\2\a\p\r\v\t\a\w\x\j\i\4\w\u\p\c\l\x\r\3\g\2\s\c\9\8\d\f\z\y\y\d\j ]] 00:06:16.113 13:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.113 13:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:16.113 [2024-07-25 13:02:08.225959] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:16.113 [2024-07-25 13:02:08.226062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62228 ] 00:06:16.371 [2024-07-25 13:02:08.362326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.371 [2024-07-25 13:02:08.457398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.371 [2024-07-25 13:02:08.511556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:16.629  Copying: 512/512 [B] (average 250 kBps) 00:06:16.629 00:06:16.629 13:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5w1i0idvzxtwum2323vljg8kb6cjk866mtv5480ana577d2pg9xxihan6jdp0r3tpbtw55lc7w49f50vkofck9538rs0ncqb3nbq9ozi1x343tmqe131oxuru9jitf0ooiujfppcjpvex7izrgkh2v488bdgnmn90f7buc1mss2kdejszsju4778ddivr3wyheo20e1f52axwruefyt5izq3d39y77rt8j1kwo5r61ssjp1mnhgcfqsvipl7mex7pq0shh90x5yuzz6gv01pjvlz1wq2pkfy8pe3zbkuuremfrimkudmzpcu6nqrbxkbstwwjmghhapi9m8lym2y0i5dceojrxsh4xje1qycyecuwdeo8gdfcx69gvu7rd6fmekyh4m6s50h13fv6g1z52jn5sl1ptq9s1pewqvd58064mpqemui4y4cxecum2yoouhsjo0d17cgu8ffdespkzupjgsfe12aprvtawxji4wupclxr3g2sc98dfzyydj == \x\5\w\1\i\0\i\d\v\z\x\t\w\u\m\2\3\2\3\v\l\j\g\8\k\b\6\c\j\k\8\6\6\m\t\v\5\4\8\0\a\n\a\5\7\7\d\2\p\g\9\x\x\i\h\a\n\6\j\d\p\0\r\3\t\p\b\t\w\5\5\l\c\7\w\4\9\f\5\0\v\k\o\f\c\k\9\5\3\8\r\s\0\n\c\q\b\3\n\b\q\9\o\z\i\1\x\3\4\3\t\m\q\e\1\3\1\o\x\u\r\u\9\j\i\t\f\0\o\o\i\u\j\f\p\p\c\j\p\v\e\x\7\i\z\r\g\k\h\2\v\4\8\8\b\d\g\n\m\n\9\0\f\7\b\u\c\1\m\s\s\2\k\d\e\j\s\z\s\j\u\4\7\7\8\d\d\i\v\r\3\w\y\h\e\o\2\0\e\1\f\5\2\a\x\w\r\u\e\f\y\t\5\i\z\q\3\d\3\9\y\7\7\r\t\8\j\1\k\w\o\5\r\6\1\s\s\j\p\1\m\n\h\g\c\f\q\s\v\i\p\l\7\m\e\x\7\p\q\0\s\h\h\9\0\x\5\y\u\z\z\6\g\v\0\1\p\j\v\l\z\1\w\q\2\p\k\f\y\8\p\e\3\z\b\k\u\u\r\e\m\f\r\i\m\k\u\d\m\z\p\c\u\6\n\q\r\b\x\k\b\s\t\w\w\j\m\g\h\h\a\p\i\9\m\8\l\y\m\2\y\0\i\5\d\c\e\o\j\r\x\s\h\4\x\j\e\1\q\y\c\y\e\c\u\w\d\e\o\8\g\d\f\c\x\6\9\g\v\u\7\r\d\6\f\m\e\k\y\h\4\m\6\s\5\0\h\1\3\f\v\6\g\1\z\5\2\j\n\5\s\l\1\p\t\q\9\s\1\p\e\w\q\v\d\5\8\0\6\4\m\p\q\e\m\u\i\4\y\4\c\x\e\c\u\m\2\y\o\o\u\h\s\j\o\0\d\1\7\c\g\u\8\f\f\d\e\s\p\k\z\u\p\j\g\s\f\e\1\2\a\p\r\v\t\a\w\x\j\i\4\w\u\p\c\l\x\r\3\g\2\s\c\9\8\d\f\z\y\y\d\j ]] 00:06:16.629 13:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.629 13:02:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:16.888 [2024-07-25 13:02:08.828303] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:16.888 [2024-07-25 13:02:08.828392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:06:16.888 [2024-07-25 13:02:08.963270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.888 [2024-07-25 13:02:09.077509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.147 [2024-07-25 13:02:09.133244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.405  Copying: 512/512 [B] (average 250 kBps) 00:06:17.405 00:06:17.405 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x5w1i0idvzxtwum2323vljg8kb6cjk866mtv5480ana577d2pg9xxihan6jdp0r3tpbtw55lc7w49f50vkofck9538rs0ncqb3nbq9ozi1x343tmqe131oxuru9jitf0ooiujfppcjpvex7izrgkh2v488bdgnmn90f7buc1mss2kdejszsju4778ddivr3wyheo20e1f52axwruefyt5izq3d39y77rt8j1kwo5r61ssjp1mnhgcfqsvipl7mex7pq0shh90x5yuzz6gv01pjvlz1wq2pkfy8pe3zbkuuremfrimkudmzpcu6nqrbxkbstwwjmghhapi9m8lym2y0i5dceojrxsh4xje1qycyecuwdeo8gdfcx69gvu7rd6fmekyh4m6s50h13fv6g1z52jn5sl1ptq9s1pewqvd58064mpqemui4y4cxecum2yoouhsjo0d17cgu8ffdespkzupjgsfe12aprvtawxji4wupclxr3g2sc98dfzyydj == \x\5\w\1\i\0\i\d\v\z\x\t\w\u\m\2\3\2\3\v\l\j\g\8\k\b\6\c\j\k\8\6\6\m\t\v\5\4\8\0\a\n\a\5\7\7\d\2\p\g\9\x\x\i\h\a\n\6\j\d\p\0\r\3\t\p\b\t\w\5\5\l\c\7\w\4\9\f\5\0\v\k\o\f\c\k\9\5\3\8\r\s\0\n\c\q\b\3\n\b\q\9\o\z\i\1\x\3\4\3\t\m\q\e\1\3\1\o\x\u\r\u\9\j\i\t\f\0\o\o\i\u\j\f\p\p\c\j\p\v\e\x\7\i\z\r\g\k\h\2\v\4\8\8\b\d\g\n\m\n\9\0\f\7\b\u\c\1\m\s\s\2\k\d\e\j\s\z\s\j\u\4\7\7\8\d\d\i\v\r\3\w\y\h\e\o\2\0\e\1\f\5\2\a\x\w\r\u\e\f\y\t\5\i\z\q\3\d\3\9\y\7\7\r\t\8\j\1\k\w\o\5\r\6\1\s\s\j\p\1\m\n\h\g\c\f\q\s\v\i\p\l\7\m\e\x\7\p\q\0\s\h\h\9\0\x\5\y\u\z\z\6\g\v\0\1\p\j\v\l\z\1\w\q\2\p\k\f\y\8\p\e\3\z\b\k\u\u\r\e\m\f\r\i\m\k\u\d\m\z\p\c\u\6\n\q\r\b\x\k\b\s\t\w\w\j\m\g\h\h\a\p\i\9\m\8\l\y\m\2\y\0\i\5\d\c\e\o\j\r\x\s\h\4\x\j\e\1\q\y\c\y\e\c\u\w\d\e\o\8\g\d\f\c\x\6\9\g\v\u\7\r\d\6\f\m\e\k\y\h\4\m\6\s\5\0\h\1\3\f\v\6\g\1\z\5\2\j\n\5\s\l\1\p\t\q\9\s\1\p\e\w\q\v\d\5\8\0\6\4\m\p\q\e\m\u\i\4\y\4\c\x\e\c\u\m\2\y\o\o\u\h\s\j\o\0\d\1\7\c\g\u\8\f\f\d\e\s\p\k\z\u\p\j\g\s\f\e\1\2\a\p\r\v\t\a\w\x\j\i\4\w\u\p\c\l\x\r\3\g\2\s\c\9\8\d\f\z\y\y\d\j ]] 00:06:17.405 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:17.405 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:17.405 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:17.405 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:17.405 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.405 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:17.405 [2024-07-25 13:02:09.451093] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:17.405 [2024-07-25 13:02:09.451185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62248 ] 00:06:17.405 [2024-07-25 13:02:09.584392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.663 [2024-07-25 13:02:09.689804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.663 [2024-07-25 13:02:09.746737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.921  Copying: 512/512 [B] (average 500 kBps) 00:06:17.921 00:06:17.921 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jox93cl9l3mopmydh90jyqw20apybmp1h3w1gxa5c16b99pfdrzqzidnglp7l2ue3d71nhaw2504kt5klofqpgf6r6fvfdo07inwbwnev1z7w45auvvh01jcbb0iwvhrmwagu7ghibjcl228jsjkuzw30wbgstaivpkfnph8unxftg9b73fk8lmyyhn52xet73z2an3lee1l7nfg40g39dswrpkqmg13fkb15uiw83np2x7p0rkl9y8xzxwp1u9r1ips8cxxt1kkt1054fukfiq80gs9ih3y591btce67g0u724f1ljdn0t27m32wj6m75gb6rgdypvh0n0xp0o0c64c95o6ktqci90rbjwavweo8y2s870vy488fa63xp2nl4abdnbdlcuck6gvygabva8m50chbjkb6rl3ocqawjw9ix10l4y17sy2hz3qm2ohmotn5hdqqt9rv51pxpf6z1uzulxqhqx9qqjd16zs6h182a4s8nr2e19bal9h7r92 == \j\o\x\9\3\c\l\9\l\3\m\o\p\m\y\d\h\9\0\j\y\q\w\2\0\a\p\y\b\m\p\1\h\3\w\1\g\x\a\5\c\1\6\b\9\9\p\f\d\r\z\q\z\i\d\n\g\l\p\7\l\2\u\e\3\d\7\1\n\h\a\w\2\5\0\4\k\t\5\k\l\o\f\q\p\g\f\6\r\6\f\v\f\d\o\0\7\i\n\w\b\w\n\e\v\1\z\7\w\4\5\a\u\v\v\h\0\1\j\c\b\b\0\i\w\v\h\r\m\w\a\g\u\7\g\h\i\b\j\c\l\2\2\8\j\s\j\k\u\z\w\3\0\w\b\g\s\t\a\i\v\p\k\f\n\p\h\8\u\n\x\f\t\g\9\b\7\3\f\k\8\l\m\y\y\h\n\5\2\x\e\t\7\3\z\2\a\n\3\l\e\e\1\l\7\n\f\g\4\0\g\3\9\d\s\w\r\p\k\q\m\g\1\3\f\k\b\1\5\u\i\w\8\3\n\p\2\x\7\p\0\r\k\l\9\y\8\x\z\x\w\p\1\u\9\r\1\i\p\s\8\c\x\x\t\1\k\k\t\1\0\5\4\f\u\k\f\i\q\8\0\g\s\9\i\h\3\y\5\9\1\b\t\c\e\6\7\g\0\u\7\2\4\f\1\l\j\d\n\0\t\2\7\m\3\2\w\j\6\m\7\5\g\b\6\r\g\d\y\p\v\h\0\n\0\x\p\0\o\0\c\6\4\c\9\5\o\6\k\t\q\c\i\9\0\r\b\j\w\a\v\w\e\o\8\y\2\s\8\7\0\v\y\4\8\8\f\a\6\3\x\p\2\n\l\4\a\b\d\n\b\d\l\c\u\c\k\6\g\v\y\g\a\b\v\a\8\m\5\0\c\h\b\j\k\b\6\r\l\3\o\c\q\a\w\j\w\9\i\x\1\0\l\4\y\1\7\s\y\2\h\z\3\q\m\2\o\h\m\o\t\n\5\h\d\q\q\t\9\r\v\5\1\p\x\p\f\6\z\1\u\z\u\l\x\q\h\q\x\9\q\q\j\d\1\6\z\s\6\h\1\8\2\a\4\s\8\n\r\2\e\1\9\b\a\l\9\h\7\r\9\2 ]] 00:06:17.921 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.921 13:02:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:17.921 [2024-07-25 13:02:10.051703] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:17.921 [2024-07-25 13:02:10.051822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62258 ] 00:06:18.179 [2024-07-25 13:02:10.190203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.179 [2024-07-25 13:02:10.298016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.179 [2024-07-25 13:02:10.355660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.472  Copying: 512/512 [B] (average 500 kBps) 00:06:18.472 00:06:18.472 13:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jox93cl9l3mopmydh90jyqw20apybmp1h3w1gxa5c16b99pfdrzqzidnglp7l2ue3d71nhaw2504kt5klofqpgf6r6fvfdo07inwbwnev1z7w45auvvh01jcbb0iwvhrmwagu7ghibjcl228jsjkuzw30wbgstaivpkfnph8unxftg9b73fk8lmyyhn52xet73z2an3lee1l7nfg40g39dswrpkqmg13fkb15uiw83np2x7p0rkl9y8xzxwp1u9r1ips8cxxt1kkt1054fukfiq80gs9ih3y591btce67g0u724f1ljdn0t27m32wj6m75gb6rgdypvh0n0xp0o0c64c95o6ktqci90rbjwavweo8y2s870vy488fa63xp2nl4abdnbdlcuck6gvygabva8m50chbjkb6rl3ocqawjw9ix10l4y17sy2hz3qm2ohmotn5hdqqt9rv51pxpf6z1uzulxqhqx9qqjd16zs6h182a4s8nr2e19bal9h7r92 == \j\o\x\9\3\c\l\9\l\3\m\o\p\m\y\d\h\9\0\j\y\q\w\2\0\a\p\y\b\m\p\1\h\3\w\1\g\x\a\5\c\1\6\b\9\9\p\f\d\r\z\q\z\i\d\n\g\l\p\7\l\2\u\e\3\d\7\1\n\h\a\w\2\5\0\4\k\t\5\k\l\o\f\q\p\g\f\6\r\6\f\v\f\d\o\0\7\i\n\w\b\w\n\e\v\1\z\7\w\4\5\a\u\v\v\h\0\1\j\c\b\b\0\i\w\v\h\r\m\w\a\g\u\7\g\h\i\b\j\c\l\2\2\8\j\s\j\k\u\z\w\3\0\w\b\g\s\t\a\i\v\p\k\f\n\p\h\8\u\n\x\f\t\g\9\b\7\3\f\k\8\l\m\y\y\h\n\5\2\x\e\t\7\3\z\2\a\n\3\l\e\e\1\l\7\n\f\g\4\0\g\3\9\d\s\w\r\p\k\q\m\g\1\3\f\k\b\1\5\u\i\w\8\3\n\p\2\x\7\p\0\r\k\l\9\y\8\x\z\x\w\p\1\u\9\r\1\i\p\s\8\c\x\x\t\1\k\k\t\1\0\5\4\f\u\k\f\i\q\8\0\g\s\9\i\h\3\y\5\9\1\b\t\c\e\6\7\g\0\u\7\2\4\f\1\l\j\d\n\0\t\2\7\m\3\2\w\j\6\m\7\5\g\b\6\r\g\d\y\p\v\h\0\n\0\x\p\0\o\0\c\6\4\c\9\5\o\6\k\t\q\c\i\9\0\r\b\j\w\a\v\w\e\o\8\y\2\s\8\7\0\v\y\4\8\8\f\a\6\3\x\p\2\n\l\4\a\b\d\n\b\d\l\c\u\c\k\6\g\v\y\g\a\b\v\a\8\m\5\0\c\h\b\j\k\b\6\r\l\3\o\c\q\a\w\j\w\9\i\x\1\0\l\4\y\1\7\s\y\2\h\z\3\q\m\2\o\h\m\o\t\n\5\h\d\q\q\t\9\r\v\5\1\p\x\p\f\6\z\1\u\z\u\l\x\q\h\q\x\9\q\q\j\d\1\6\z\s\6\h\1\8\2\a\4\s\8\n\r\2\e\1\9\b\a\l\9\h\7\r\9\2 ]] 00:06:18.472 13:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:18.472 13:02:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:18.739 [2024-07-25 13:02:10.683389] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:18.739 [2024-07-25 13:02:10.683483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62267 ] 00:06:18.739 [2024-07-25 13:02:10.821366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.997 [2024-07-25 13:02:10.941340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.997 [2024-07-25 13:02:10.998814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.256  Copying: 512/512 [B] (average 125 kBps) 00:06:19.256 00:06:19.256 13:02:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jox93cl9l3mopmydh90jyqw20apybmp1h3w1gxa5c16b99pfdrzqzidnglp7l2ue3d71nhaw2504kt5klofqpgf6r6fvfdo07inwbwnev1z7w45auvvh01jcbb0iwvhrmwagu7ghibjcl228jsjkuzw30wbgstaivpkfnph8unxftg9b73fk8lmyyhn52xet73z2an3lee1l7nfg40g39dswrpkqmg13fkb15uiw83np2x7p0rkl9y8xzxwp1u9r1ips8cxxt1kkt1054fukfiq80gs9ih3y591btce67g0u724f1ljdn0t27m32wj6m75gb6rgdypvh0n0xp0o0c64c95o6ktqci90rbjwavweo8y2s870vy488fa63xp2nl4abdnbdlcuck6gvygabva8m50chbjkb6rl3ocqawjw9ix10l4y17sy2hz3qm2ohmotn5hdqqt9rv51pxpf6z1uzulxqhqx9qqjd16zs6h182a4s8nr2e19bal9h7r92 == \j\o\x\9\3\c\l\9\l\3\m\o\p\m\y\d\h\9\0\j\y\q\w\2\0\a\p\y\b\m\p\1\h\3\w\1\g\x\a\5\c\1\6\b\9\9\p\f\d\r\z\q\z\i\d\n\g\l\p\7\l\2\u\e\3\d\7\1\n\h\a\w\2\5\0\4\k\t\5\k\l\o\f\q\p\g\f\6\r\6\f\v\f\d\o\0\7\i\n\w\b\w\n\e\v\1\z\7\w\4\5\a\u\v\v\h\0\1\j\c\b\b\0\i\w\v\h\r\m\w\a\g\u\7\g\h\i\b\j\c\l\2\2\8\j\s\j\k\u\z\w\3\0\w\b\g\s\t\a\i\v\p\k\f\n\p\h\8\u\n\x\f\t\g\9\b\7\3\f\k\8\l\m\y\y\h\n\5\2\x\e\t\7\3\z\2\a\n\3\l\e\e\1\l\7\n\f\g\4\0\g\3\9\d\s\w\r\p\k\q\m\g\1\3\f\k\b\1\5\u\i\w\8\3\n\p\2\x\7\p\0\r\k\l\9\y\8\x\z\x\w\p\1\u\9\r\1\i\p\s\8\c\x\x\t\1\k\k\t\1\0\5\4\f\u\k\f\i\q\8\0\g\s\9\i\h\3\y\5\9\1\b\t\c\e\6\7\g\0\u\7\2\4\f\1\l\j\d\n\0\t\2\7\m\3\2\w\j\6\m\7\5\g\b\6\r\g\d\y\p\v\h\0\n\0\x\p\0\o\0\c\6\4\c\9\5\o\6\k\t\q\c\i\9\0\r\b\j\w\a\v\w\e\o\8\y\2\s\8\7\0\v\y\4\8\8\f\a\6\3\x\p\2\n\l\4\a\b\d\n\b\d\l\c\u\c\k\6\g\v\y\g\a\b\v\a\8\m\5\0\c\h\b\j\k\b\6\r\l\3\o\c\q\a\w\j\w\9\i\x\1\0\l\4\y\1\7\s\y\2\h\z\3\q\m\2\o\h\m\o\t\n\5\h\d\q\q\t\9\r\v\5\1\p\x\p\f\6\z\1\u\z\u\l\x\q\h\q\x\9\q\q\j\d\1\6\z\s\6\h\1\8\2\a\4\s\8\n\r\2\e\1\9\b\a\l\9\h\7\r\9\2 ]] 00:06:19.256 13:02:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:19.256 13:02:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:19.256 [2024-07-25 13:02:11.302666] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
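Editor's note: each dd_flags_misc pass above (and the final dsync pass just below) is one cell of a small flag matrix: the input is opened with direct or nonblock, the output with direct, nonblock, sync or dsync, and the 512-byte payload has to come through unchanged. The driving loop, reconstructed from the xtrace lines, looks roughly like this; the content check is simplified to cmp, and head -c 512 /dev/urandom stands in for the harness's gen_bytes 512:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > dd.dump0          # fresh payload per read flag
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" \
                   --of=dd.dump1 --oflag="$flag_rw" 2>/dev/null
        cmp -s dd.dump0 dd.dump1 && echo "ok: iflag=$flag_ro oflag=$flag_rw"
    done
done

That accounts for the eight 512-byte "Copying:" lines logged in this stretch.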
00:06:19.256 [2024-07-25 13:02:11.302758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62282 ] 00:06:19.256 [2024-07-25 13:02:11.440648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.515 [2024-07-25 13:02:11.538372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.515 [2024-07-25 13:02:11.594136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.773  Copying: 512/512 [B] (average 250 kBps) 00:06:19.773 00:06:19.773 ************************************ 00:06:19.773 END TEST dd_flags_misc 00:06:19.773 ************************************ 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jox93cl9l3mopmydh90jyqw20apybmp1h3w1gxa5c16b99pfdrzqzidnglp7l2ue3d71nhaw2504kt5klofqpgf6r6fvfdo07inwbwnev1z7w45auvvh01jcbb0iwvhrmwagu7ghibjcl228jsjkuzw30wbgstaivpkfnph8unxftg9b73fk8lmyyhn52xet73z2an3lee1l7nfg40g39dswrpkqmg13fkb15uiw83np2x7p0rkl9y8xzxwp1u9r1ips8cxxt1kkt1054fukfiq80gs9ih3y591btce67g0u724f1ljdn0t27m32wj6m75gb6rgdypvh0n0xp0o0c64c95o6ktqci90rbjwavweo8y2s870vy488fa63xp2nl4abdnbdlcuck6gvygabva8m50chbjkb6rl3ocqawjw9ix10l4y17sy2hz3qm2ohmotn5hdqqt9rv51pxpf6z1uzulxqhqx9qqjd16zs6h182a4s8nr2e19bal9h7r92 == \j\o\x\9\3\c\l\9\l\3\m\o\p\m\y\d\h\9\0\j\y\q\w\2\0\a\p\y\b\m\p\1\h\3\w\1\g\x\a\5\c\1\6\b\9\9\p\f\d\r\z\q\z\i\d\n\g\l\p\7\l\2\u\e\3\d\7\1\n\h\a\w\2\5\0\4\k\t\5\k\l\o\f\q\p\g\f\6\r\6\f\v\f\d\o\0\7\i\n\w\b\w\n\e\v\1\z\7\w\4\5\a\u\v\v\h\0\1\j\c\b\b\0\i\w\v\h\r\m\w\a\g\u\7\g\h\i\b\j\c\l\2\2\8\j\s\j\k\u\z\w\3\0\w\b\g\s\t\a\i\v\p\k\f\n\p\h\8\u\n\x\f\t\g\9\b\7\3\f\k\8\l\m\y\y\h\n\5\2\x\e\t\7\3\z\2\a\n\3\l\e\e\1\l\7\n\f\g\4\0\g\3\9\d\s\w\r\p\k\q\m\g\1\3\f\k\b\1\5\u\i\w\8\3\n\p\2\x\7\p\0\r\k\l\9\y\8\x\z\x\w\p\1\u\9\r\1\i\p\s\8\c\x\x\t\1\k\k\t\1\0\5\4\f\u\k\f\i\q\8\0\g\s\9\i\h\3\y\5\9\1\b\t\c\e\6\7\g\0\u\7\2\4\f\1\l\j\d\n\0\t\2\7\m\3\2\w\j\6\m\7\5\g\b\6\r\g\d\y\p\v\h\0\n\0\x\p\0\o\0\c\6\4\c\9\5\o\6\k\t\q\c\i\9\0\r\b\j\w\a\v\w\e\o\8\y\2\s\8\7\0\v\y\4\8\8\f\a\6\3\x\p\2\n\l\4\a\b\d\n\b\d\l\c\u\c\k\6\g\v\y\g\a\b\v\a\8\m\5\0\c\h\b\j\k\b\6\r\l\3\o\c\q\a\w\j\w\9\i\x\1\0\l\4\y\1\7\s\y\2\h\z\3\q\m\2\o\h\m\o\t\n\5\h\d\q\q\t\9\r\v\5\1\p\x\p\f\6\z\1\u\z\u\l\x\q\h\q\x\9\q\q\j\d\1\6\z\s\6\h\1\8\2\a\4\s\8\n\r\2\e\1\9\b\a\l\9\h\7\r\9\2 ]] 00:06:19.773 00:06:19.773 real 0m4.894s 00:06:19.773 user 0m2.810s 00:06:19.773 sys 0m2.323s 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:19.773 * Second test run, disabling liburing, forcing AIO 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.773 13:02:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:19.774 ************************************ 00:06:19.774 START TEST dd_flag_append_forced_aio 00:06:19.774 ************************************ 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=so72km5ta9qvh6t14icwgydttcd11w1w 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=n2sjdxfiqv95qw09nvxknrxl5jr8dr5q 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s so72km5ta9qvh6t14icwgydttcd11w1w 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s n2sjdxfiqv95qw09nvxknrxl5jr8dr5q 00:06:19.774 13:02:11 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:19.774 [2024-07-25 13:02:11.947576] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:19.774 [2024-07-25 13:02:11.947698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62311 ] 00:06:20.033 [2024-07-25 13:02:12.087244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.033 [2024-07-25 13:02:12.208136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.291 [2024-07-25 13:02:12.271243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.550  Copying: 32/32 [B] (average 31 kBps) 00:06:20.550 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ n2sjdxfiqv95qw09nvxknrxl5jr8dr5qso72km5ta9qvh6t14icwgydttcd11w1w == \n\2\s\j\d\x\f\i\q\v\9\5\q\w\0\9\n\v\x\k\n\r\x\l\5\j\r\8\d\r\5\q\s\o\7\2\k\m\5\t\a\9\q\v\h\6\t\1\4\i\c\w\g\y\d\t\t\c\d\1\1\w\1\w ]] 00:06:20.550 00:06:20.550 real 0m0.688s 00:06:20.550 user 0m0.402s 00:06:20.550 sys 0m0.166s 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.550 ************************************ 00:06:20.550 END TEST dd_flag_append_forced_aio 00:06:20.550 ************************************ 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:20.550 ************************************ 00:06:20.550 START TEST dd_flag_directory_forced_aio 00:06:20.550 ************************************ 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:20.550 13:02:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.550 [2024-07-25 13:02:12.683241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:20.550 [2024-07-25 13:02:12.683354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62337 ] 00:06:20.809 [2024-07-25 13:02:12.822861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.809 [2024-07-25 13:02:12.939104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.809 [2024-07-25 13:02:12.996810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.068 [2024-07-25 13:02:13.031836] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:21.068 [2024-07-25 13:02:13.031922] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:21.068 [2024-07-25 13:02:13.031968] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.068 [2024-07-25 13:02:13.148887] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.068 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:21.326 [2024-07-25 13:02:13.301356] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:21.326 [2024-07-25 13:02:13.301474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62347 ] 00:06:21.326 [2024-07-25 13:02:13.437584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.584 [2024-07-25 13:02:13.533121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.585 [2024-07-25 13:02:13.591516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.585 [2024-07-25 13:02:13.625719] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:21.585 [2024-07-25 13:02:13.625793] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:21.585 [2024-07-25 13:02:13.625823] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.585 [2024-07-25 13:02:13.747454] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.843 ************************************ 00:06:21.843 END TEST dd_flag_directory_forced_aio 00:06:21.843 ************************************ 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@670 -- # es=1 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.843 00:06:21.843 real 0m1.243s 00:06:21.843 user 0m0.718s 00:06:21.843 sys 0m0.313s 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.843 ************************************ 00:06:21.843 START TEST dd_flag_nofollow_forced_aio 00:06:21.843 ************************************ 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.843 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.844 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.844 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.844 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.844 13:02:13 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.844 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.844 13:02:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.844 [2024-07-25 13:02:13.988316] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:21.844 [2024-07-25 13:02:13.988443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62381 ] 00:06:22.102 [2024-07-25 13:02:14.127388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.102 [2024-07-25 13:02:14.230414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.102 [2024-07-25 13:02:14.287609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.361 [2024-07-25 13:02:14.324276] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:22.361 [2024-07-25 13:02:14.324329] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:22.361 [2024-07-25 13:02:14.324343] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.361 [2024-07-25 13:02:14.445631] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.620 13:02:14 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.620 13:02:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:22.620 [2024-07-25 13:02:14.620785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:22.620 [2024-07-25 13:02:14.620886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62391 ] 00:06:22.620 [2024-07-25 13:02:14.757991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.879 [2024-07-25 13:02:14.840926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.879 [2024-07-25 13:02:14.896760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.879 [2024-07-25 13:02:14.932067] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:22.879 [2024-07-25 13:02:14.932108] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:22.879 [2024-07-25 13:02:14.932140] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.879 [2024-07-25 13:02:15.051292] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:23.138 13:02:15 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.138 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.138 [2024-07-25 13:02:15.187814] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:23.138 [2024-07-25 13:02:15.187917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62400 ] 00:06:23.138 [2024-07-25 13:02:15.323815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.396 [2024-07-25 13:02:15.413162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.396 [2024-07-25 13:02:15.468631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.655  Copying: 512/512 [B] (average 500 kBps) 00:06:23.655 00:06:23.655 ************************************ 00:06:23.655 END TEST dd_flag_nofollow_forced_aio 00:06:23.655 ************************************ 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ u2f9ehshvxp8wkjz99rumnwkx87c6s4akcx2q10fn3f4xd4s6mgp44ct5wy31024zld5lu721rxkf9ano8r4s5trctojvrg1x68ddeqhmqq6k8aeqi9o7858qmrpu1oqjikj4dyayomjgylx9k2q8ma1rfaenwotgwgmpm12000m7u4tro927gou71uuvrocrec1nqnhgk8ok8u4yffsdkpcdacqfhkohtyamrq6grfcn35aw855c05se9u0ohirmw6edczg45312sby2ubqrw8qumr7ek8ybom055qvp2p0wx2mh3ifeejq8tdjwq6af0q996j4vx42h0qs21ce51kue80n4d13w801p80ff8gbsiynz0kx4e6o56nfj9jd7vain5elhsvq0lw5d211xizb6srbehrj9hls3w1rzcs5qnmi7nr6xde0doby4w330azles3euzrhlpb2txri9my18ctyuggymj5twhqknzkvtbjhq2z7erwhs29wfwh8 == \u\2\f\9\e\h\s\h\v\x\p\8\w\k\j\z\9\9\r\u\m\n\w\k\x\8\7\c\6\s\4\a\k\c\x\2\q\1\0\f\n\3\f\4\x\d\4\s\6\m\g\p\4\4\c\t\5\w\y\3\1\0\2\4\z\l\d\5\l\u\7\2\1\r\x\k\f\9\a\n\o\8\r\4\s\5\t\r\c\t\o\j\v\r\g\1\x\6\8\d\d\e\q\h\m\q\q\6\k\8\a\e\q\i\9\o\7\8\5\8\q\m\r\p\u\1\o\q\j\i\k\j\4\d\y\a\y\o\m\j\g\y\l\x\9\k\2\q\8\m\a\1\r\f\a\e\n\w\o\t\g\w\g\m\p\m\1\2\0\0\0\m\7\u\4\t\r\o\9\2\7\g\o\u\7\1\u\u\v\r\o\c\r\e\c\1\n\q\n\h\g\k\8\o\k\8\u\4\y\f\f\s\d\k\p\c\d\a\c\q\f\h\k\o\h\t\y\a\m\r\q\6\g\r\f\c\n\3\5\a\w\8\5\5\c\0\5\s\e\9\u\0\o\h\i\r\m\w\6\e\d\c\z\g\4\5\3\1\2\s\b\y\2\u\b\q\r\w\8\q\u\m\r\7\e\k\8\y\b\o\m\0\5\5\q\v\p\2\p\0\w\x\2\m\h\3\i\f\e\e\j\q\8\t\d\j\w\q\6\a\f\0\q\9\9\6\j\4\v\x\4\2\h\0\q\s\2\1\c\e\5\1\k\u\e\8\0\n\4\d\1\3\w\8\0\1\p\8\0\f\f\8\g\b\s\i\y\n\z\0\k\x\4\e\6\o\5\6\n\f\j\9\j\d\7\v\a\i\n\5\e\l\h\s\v\q\0\l\w\5\d\2\1\1\x\i\z\b\6\s\r\b\e\h\r\j\9\h\l\s\3\w\1\r\z\c\s\5\q\n\m\i\7\n\r\6\x\d\e\0\d\o\b\y\4\w\3\3\0\a\z\l\e\s\3\e\u\z\r\h\l\p\b\2\t\x\r\i\9\m\y\1\8\c\t\y\u\g\g\y\m\j\5\t\w\h\q\k\n\z\k\v\t\b\j\h\q\2\z\7\e\r\w\h\s\2\9\w\f\w\h\8 ]] 00:06:23.655 00:06:23.655 real 0m1.838s 00:06:23.655 user 0m1.053s 00:06:23.655 sys 0m0.453s 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:23.655 
13:02:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:23.655 ************************************ 00:06:23.655 START TEST dd_flag_noatime_forced_aio 00:06:23.655 ************************************ 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721912535 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721912535 00:06:23.655 13:02:15 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:25.031 13:02:16 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.031 [2024-07-25 13:02:16.894160] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:25.031 [2024-07-25 13:02:16.895431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62440 ] 00:06:25.031 [2024-07-25 13:02:17.040258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.031 [2024-07-25 13:02:17.153440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.031 [2024-07-25 13:02:17.211179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.547  Copying: 512/512 [B] (average 500 kBps) 00:06:25.547 00:06:25.547 13:02:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.547 13:02:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721912535 )) 00:06:25.547 13:02:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.547 13:02:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721912535 )) 00:06:25.547 13:02:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.547 [2024-07-25 13:02:17.548281] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:25.547 [2024-07-25 13:02:17.548371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62456 ] 00:06:25.547 [2024-07-25 13:02:17.685245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.806 [2024-07-25 13:02:17.766312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.806 [2024-07-25 13:02:17.822006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.064  Copying: 512/512 [B] (average 500 kBps) 00:06:26.064 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721912537 )) 00:06:26.064 00:06:26.064 real 0m2.307s 00:06:26.064 user 0m0.722s 00:06:26.064 sys 0m0.330s 00:06:26.064 ************************************ 00:06:26.064 END TEST dd_flag_noatime_forced_aio 00:06:26.064 ************************************ 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:26.064 
************************************ 00:06:26.064 START TEST dd_flags_misc_forced_aio 00:06:26.064 ************************************ 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.064 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:26.064 [2024-07-25 13:02:18.232344] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:26.064 [2024-07-25 13:02:18.232478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62484 ] 00:06:26.323 [2024-07-25 13:02:18.373584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.323 [2024-07-25 13:02:18.497530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.581 [2024-07-25 13:02:18.559537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.839  Copying: 512/512 [B] (average 500 kBps) 00:06:26.839 00:06:26.839 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zlrfh5lhwvtohl71mryx0gb7832asnh9wyrrkctgbjb5apnmrwcrynygfqzly15nmul4qhokipisj4sa07w3xzucvswvztwbp07ejy8e13o7vtj7yfhgbhp5mgnm2c7osf4a8pcgso85luej19pry017cqct8t8lf3dfvmu75gxerq0o4j5ye3ftg7jmoh6zcxlbnwdl7pmh35443gb02dcro4kff1r0xhoosoi1qcaqrdn35xapao4io2z1kl9laaqpb80cuygbooeykmz0xtndy8dz485378kc99akv53d3f63smuowowu2k968yekzbhi892s0w3rvtjk04613vhc2q3153fu10bn6qxqsiw0r5njpa1e2rg486zzmbhgsgniw88dcrtuw3t78tt9tuw7mauydce418602iogq5yxi9aaepboid0f6b6qfq6rtwbfcjyo4kdqwr15ibh0i9zfrzoj4ud0oup9qfmiu889cexrf0vlpi7zrytn8phm == 
\z\l\r\f\h\5\l\h\w\v\t\o\h\l\7\1\m\r\y\x\0\g\b\7\8\3\2\a\s\n\h\9\w\y\r\r\k\c\t\g\b\j\b\5\a\p\n\m\r\w\c\r\y\n\y\g\f\q\z\l\y\1\5\n\m\u\l\4\q\h\o\k\i\p\i\s\j\4\s\a\0\7\w\3\x\z\u\c\v\s\w\v\z\t\w\b\p\0\7\e\j\y\8\e\1\3\o\7\v\t\j\7\y\f\h\g\b\h\p\5\m\g\n\m\2\c\7\o\s\f\4\a\8\p\c\g\s\o\8\5\l\u\e\j\1\9\p\r\y\0\1\7\c\q\c\t\8\t\8\l\f\3\d\f\v\m\u\7\5\g\x\e\r\q\0\o\4\j\5\y\e\3\f\t\g\7\j\m\o\h\6\z\c\x\l\b\n\w\d\l\7\p\m\h\3\5\4\4\3\g\b\0\2\d\c\r\o\4\k\f\f\1\r\0\x\h\o\o\s\o\i\1\q\c\a\q\r\d\n\3\5\x\a\p\a\o\4\i\o\2\z\1\k\l\9\l\a\a\q\p\b\8\0\c\u\y\g\b\o\o\e\y\k\m\z\0\x\t\n\d\y\8\d\z\4\8\5\3\7\8\k\c\9\9\a\k\v\5\3\d\3\f\6\3\s\m\u\o\w\o\w\u\2\k\9\6\8\y\e\k\z\b\h\i\8\9\2\s\0\w\3\r\v\t\j\k\0\4\6\1\3\v\h\c\2\q\3\1\5\3\f\u\1\0\b\n\6\q\x\q\s\i\w\0\r\5\n\j\p\a\1\e\2\r\g\4\8\6\z\z\m\b\h\g\s\g\n\i\w\8\8\d\c\r\t\u\w\3\t\7\8\t\t\9\t\u\w\7\m\a\u\y\d\c\e\4\1\8\6\0\2\i\o\g\q\5\y\x\i\9\a\a\e\p\b\o\i\d\0\f\6\b\6\q\f\q\6\r\t\w\b\f\c\j\y\o\4\k\d\q\w\r\1\5\i\b\h\0\i\9\z\f\r\z\o\j\4\u\d\0\o\u\p\9\q\f\m\i\u\8\8\9\c\e\x\r\f\0\v\l\p\i\7\z\r\y\t\n\8\p\h\m ]] 00:06:26.839 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.839 13:02:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:26.839 [2024-07-25 13:02:18.903464] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:26.839 [2024-07-25 13:02:18.903570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62491 ] 00:06:27.097 [2024-07-25 13:02:19.038902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.097 [2024-07-25 13:02:19.126561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.097 [2024-07-25 13:02:19.179893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.354  Copying: 512/512 [B] (average 500 kBps) 00:06:27.354 00:06:27.355 13:02:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zlrfh5lhwvtohl71mryx0gb7832asnh9wyrrkctgbjb5apnmrwcrynygfqzly15nmul4qhokipisj4sa07w3xzucvswvztwbp07ejy8e13o7vtj7yfhgbhp5mgnm2c7osf4a8pcgso85luej19pry017cqct8t8lf3dfvmu75gxerq0o4j5ye3ftg7jmoh6zcxlbnwdl7pmh35443gb02dcro4kff1r0xhoosoi1qcaqrdn35xapao4io2z1kl9laaqpb80cuygbooeykmz0xtndy8dz485378kc99akv53d3f63smuowowu2k968yekzbhi892s0w3rvtjk04613vhc2q3153fu10bn6qxqsiw0r5njpa1e2rg486zzmbhgsgniw88dcrtuw3t78tt9tuw7mauydce418602iogq5yxi9aaepboid0f6b6qfq6rtwbfcjyo4kdqwr15ibh0i9zfrzoj4ud0oup9qfmiu889cexrf0vlpi7zrytn8phm == 
\z\l\r\f\h\5\l\h\w\v\t\o\h\l\7\1\m\r\y\x\0\g\b\7\8\3\2\a\s\n\h\9\w\y\r\r\k\c\t\g\b\j\b\5\a\p\n\m\r\w\c\r\y\n\y\g\f\q\z\l\y\1\5\n\m\u\l\4\q\h\o\k\i\p\i\s\j\4\s\a\0\7\w\3\x\z\u\c\v\s\w\v\z\t\w\b\p\0\7\e\j\y\8\e\1\3\o\7\v\t\j\7\y\f\h\g\b\h\p\5\m\g\n\m\2\c\7\o\s\f\4\a\8\p\c\g\s\o\8\5\l\u\e\j\1\9\p\r\y\0\1\7\c\q\c\t\8\t\8\l\f\3\d\f\v\m\u\7\5\g\x\e\r\q\0\o\4\j\5\y\e\3\f\t\g\7\j\m\o\h\6\z\c\x\l\b\n\w\d\l\7\p\m\h\3\5\4\4\3\g\b\0\2\d\c\r\o\4\k\f\f\1\r\0\x\h\o\o\s\o\i\1\q\c\a\q\r\d\n\3\5\x\a\p\a\o\4\i\o\2\z\1\k\l\9\l\a\a\q\p\b\8\0\c\u\y\g\b\o\o\e\y\k\m\z\0\x\t\n\d\y\8\d\z\4\8\5\3\7\8\k\c\9\9\a\k\v\5\3\d\3\f\6\3\s\m\u\o\w\o\w\u\2\k\9\6\8\y\e\k\z\b\h\i\8\9\2\s\0\w\3\r\v\t\j\k\0\4\6\1\3\v\h\c\2\q\3\1\5\3\f\u\1\0\b\n\6\q\x\q\s\i\w\0\r\5\n\j\p\a\1\e\2\r\g\4\8\6\z\z\m\b\h\g\s\g\n\i\w\8\8\d\c\r\t\u\w\3\t\7\8\t\t\9\t\u\w\7\m\a\u\y\d\c\e\4\1\8\6\0\2\i\o\g\q\5\y\x\i\9\a\a\e\p\b\o\i\d\0\f\6\b\6\q\f\q\6\r\t\w\b\f\c\j\y\o\4\k\d\q\w\r\1\5\i\b\h\0\i\9\z\f\r\z\o\j\4\u\d\0\o\u\p\9\q\f\m\i\u\8\8\9\c\e\x\r\f\0\v\l\p\i\7\z\r\y\t\n\8\p\h\m ]] 00:06:27.355 13:02:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.355 13:02:19 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:27.355 [2024-07-25 13:02:19.495040] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:27.355 [2024-07-25 13:02:19.495150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62499 ] 00:06:27.613 [2024-07-25 13:02:19.634221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.613 [2024-07-25 13:02:19.724113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.613 [2024-07-25 13:02:19.779636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.871  Copying: 512/512 [B] (average 166 kBps) 00:06:27.871 00:06:27.871 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zlrfh5lhwvtohl71mryx0gb7832asnh9wyrrkctgbjb5apnmrwcrynygfqzly15nmul4qhokipisj4sa07w3xzucvswvztwbp07ejy8e13o7vtj7yfhgbhp5mgnm2c7osf4a8pcgso85luej19pry017cqct8t8lf3dfvmu75gxerq0o4j5ye3ftg7jmoh6zcxlbnwdl7pmh35443gb02dcro4kff1r0xhoosoi1qcaqrdn35xapao4io2z1kl9laaqpb80cuygbooeykmz0xtndy8dz485378kc99akv53d3f63smuowowu2k968yekzbhi892s0w3rvtjk04613vhc2q3153fu10bn6qxqsiw0r5njpa1e2rg486zzmbhgsgniw88dcrtuw3t78tt9tuw7mauydce418602iogq5yxi9aaepboid0f6b6qfq6rtwbfcjyo4kdqwr15ibh0i9zfrzoj4ud0oup9qfmiu889cexrf0vlpi7zrytn8phm == 
\z\l\r\f\h\5\l\h\w\v\t\o\h\l\7\1\m\r\y\x\0\g\b\7\8\3\2\a\s\n\h\9\w\y\r\r\k\c\t\g\b\j\b\5\a\p\n\m\r\w\c\r\y\n\y\g\f\q\z\l\y\1\5\n\m\u\l\4\q\h\o\k\i\p\i\s\j\4\s\a\0\7\w\3\x\z\u\c\v\s\w\v\z\t\w\b\p\0\7\e\j\y\8\e\1\3\o\7\v\t\j\7\y\f\h\g\b\h\p\5\m\g\n\m\2\c\7\o\s\f\4\a\8\p\c\g\s\o\8\5\l\u\e\j\1\9\p\r\y\0\1\7\c\q\c\t\8\t\8\l\f\3\d\f\v\m\u\7\5\g\x\e\r\q\0\o\4\j\5\y\e\3\f\t\g\7\j\m\o\h\6\z\c\x\l\b\n\w\d\l\7\p\m\h\3\5\4\4\3\g\b\0\2\d\c\r\o\4\k\f\f\1\r\0\x\h\o\o\s\o\i\1\q\c\a\q\r\d\n\3\5\x\a\p\a\o\4\i\o\2\z\1\k\l\9\l\a\a\q\p\b\8\0\c\u\y\g\b\o\o\e\y\k\m\z\0\x\t\n\d\y\8\d\z\4\8\5\3\7\8\k\c\9\9\a\k\v\5\3\d\3\f\6\3\s\m\u\o\w\o\w\u\2\k\9\6\8\y\e\k\z\b\h\i\8\9\2\s\0\w\3\r\v\t\j\k\0\4\6\1\3\v\h\c\2\q\3\1\5\3\f\u\1\0\b\n\6\q\x\q\s\i\w\0\r\5\n\j\p\a\1\e\2\r\g\4\8\6\z\z\m\b\h\g\s\g\n\i\w\8\8\d\c\r\t\u\w\3\t\7\8\t\t\9\t\u\w\7\m\a\u\y\d\c\e\4\1\8\6\0\2\i\o\g\q\5\y\x\i\9\a\a\e\p\b\o\i\d\0\f\6\b\6\q\f\q\6\r\t\w\b\f\c\j\y\o\4\k\d\q\w\r\1\5\i\b\h\0\i\9\z\f\r\z\o\j\4\u\d\0\o\u\p\9\q\f\m\i\u\8\8\9\c\e\x\r\f\0\v\l\p\i\7\z\r\y\t\n\8\p\h\m ]] 00:06:27.871 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.871 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:28.129 [2024-07-25 13:02:20.110424] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:28.129 [2024-07-25 13:02:20.110513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62512 ] 00:06:28.129 [2024-07-25 13:02:20.244143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.387 [2024-07-25 13:02:20.330923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.387 [2024-07-25 13:02:20.383833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.646  Copying: 512/512 [B] (average 500 kBps) 00:06:28.646 00:06:28.646 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zlrfh5lhwvtohl71mryx0gb7832asnh9wyrrkctgbjb5apnmrwcrynygfqzly15nmul4qhokipisj4sa07w3xzucvswvztwbp07ejy8e13o7vtj7yfhgbhp5mgnm2c7osf4a8pcgso85luej19pry017cqct8t8lf3dfvmu75gxerq0o4j5ye3ftg7jmoh6zcxlbnwdl7pmh35443gb02dcro4kff1r0xhoosoi1qcaqrdn35xapao4io2z1kl9laaqpb80cuygbooeykmz0xtndy8dz485378kc99akv53d3f63smuowowu2k968yekzbhi892s0w3rvtjk04613vhc2q3153fu10bn6qxqsiw0r5njpa1e2rg486zzmbhgsgniw88dcrtuw3t78tt9tuw7mauydce418602iogq5yxi9aaepboid0f6b6qfq6rtwbfcjyo4kdqwr15ibh0i9zfrzoj4ud0oup9qfmiu889cexrf0vlpi7zrytn8phm == 
\z\l\r\f\h\5\l\h\w\v\t\o\h\l\7\1\m\r\y\x\0\g\b\7\8\3\2\a\s\n\h\9\w\y\r\r\k\c\t\g\b\j\b\5\a\p\n\m\r\w\c\r\y\n\y\g\f\q\z\l\y\1\5\n\m\u\l\4\q\h\o\k\i\p\i\s\j\4\s\a\0\7\w\3\x\z\u\c\v\s\w\v\z\t\w\b\p\0\7\e\j\y\8\e\1\3\o\7\v\t\j\7\y\f\h\g\b\h\p\5\m\g\n\m\2\c\7\o\s\f\4\a\8\p\c\g\s\o\8\5\l\u\e\j\1\9\p\r\y\0\1\7\c\q\c\t\8\t\8\l\f\3\d\f\v\m\u\7\5\g\x\e\r\q\0\o\4\j\5\y\e\3\f\t\g\7\j\m\o\h\6\z\c\x\l\b\n\w\d\l\7\p\m\h\3\5\4\4\3\g\b\0\2\d\c\r\o\4\k\f\f\1\r\0\x\h\o\o\s\o\i\1\q\c\a\q\r\d\n\3\5\x\a\p\a\o\4\i\o\2\z\1\k\l\9\l\a\a\q\p\b\8\0\c\u\y\g\b\o\o\e\y\k\m\z\0\x\t\n\d\y\8\d\z\4\8\5\3\7\8\k\c\9\9\a\k\v\5\3\d\3\f\6\3\s\m\u\o\w\o\w\u\2\k\9\6\8\y\e\k\z\b\h\i\8\9\2\s\0\w\3\r\v\t\j\k\0\4\6\1\3\v\h\c\2\q\3\1\5\3\f\u\1\0\b\n\6\q\x\q\s\i\w\0\r\5\n\j\p\a\1\e\2\r\g\4\8\6\z\z\m\b\h\g\s\g\n\i\w\8\8\d\c\r\t\u\w\3\t\7\8\t\t\9\t\u\w\7\m\a\u\y\d\c\e\4\1\8\6\0\2\i\o\g\q\5\y\x\i\9\a\a\e\p\b\o\i\d\0\f\6\b\6\q\f\q\6\r\t\w\b\f\c\j\y\o\4\k\d\q\w\r\1\5\i\b\h\0\i\9\z\f\r\z\o\j\4\u\d\0\o\u\p\9\q\f\m\i\u\8\8\9\c\e\x\r\f\0\v\l\p\i\7\z\r\y\t\n\8\p\h\m ]] 00:06:28.646 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:28.646 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:28.646 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:28.646 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:28.646 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.646 13:02:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:28.646 [2024-07-25 13:02:20.731310] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:28.646 [2024-07-25 13:02:20.731405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62519 ] 00:06:28.904 [2024-07-25 13:02:20.868232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.904 [2024-07-25 13:02:20.966638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.904 [2024-07-25 13:02:21.022192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.162  Copying: 512/512 [B] (average 500 kBps) 00:06:29.162 00:06:29.162 13:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ klrstr7srrzkowemjlcetuv9rg83vlgt9bbc2hir4fhcg5hz27iogdi74xdxybbmg0txeu3bk5ij7t8g2yzm07kdxwjm5smi34gvrr7ktmw48aofe03kjrykdftu1duvvumqcsoay1q5erz4rd0wbpuoi36y90rr5axkoluwhwdf4qheaswgz4wh4iz5ul4ba082x0lrttgwc4irm2qpf6r3dr4tvz1lec6x3a0x43fb2rg9p3rmkslaem72w50d1kh6ckvqzsz1k8iqybc7afurvku7k9tqkd1w3oo6x5eohk0d7iulmlniqisb07uvxljr8nwgincdok89k7u6y7k7cot1yi7xh0xmges10g9pd8u4441lgtxyaoe63us2o0m6eaoo6myrexfuq622h274rb2ofn3y9mjje7yyh8bmj36zh096ay8x5qy8xnv90gf36e8cr89cjifgzmv6vl7hgp532f44jeu4atmm9cz6hvd3ma8i0dnvhf1nwnek == \k\l\r\s\t\r\7\s\r\r\z\k\o\w\e\m\j\l\c\e\t\u\v\9\r\g\8\3\v\l\g\t\9\b\b\c\2\h\i\r\4\f\h\c\g\5\h\z\2\7\i\o\g\d\i\7\4\x\d\x\y\b\b\m\g\0\t\x\e\u\3\b\k\5\i\j\7\t\8\g\2\y\z\m\0\7\k\d\x\w\j\m\5\s\m\i\3\4\g\v\r\r\7\k\t\m\w\4\8\a\o\f\e\0\3\k\j\r\y\k\d\f\t\u\1\d\u\v\v\u\m\q\c\s\o\a\y\1\q\5\e\r\z\4\r\d\0\w\b\p\u\o\i\3\6\y\9\0\r\r\5\a\x\k\o\l\u\w\h\w\d\f\4\q\h\e\a\s\w\g\z\4\w\h\4\i\z\5\u\l\4\b\a\0\8\2\x\0\l\r\t\t\g\w\c\4\i\r\m\2\q\p\f\6\r\3\d\r\4\t\v\z\1\l\e\c\6\x\3\a\0\x\4\3\f\b\2\r\g\9\p\3\r\m\k\s\l\a\e\m\7\2\w\5\0\d\1\k\h\6\c\k\v\q\z\s\z\1\k\8\i\q\y\b\c\7\a\f\u\r\v\k\u\7\k\9\t\q\k\d\1\w\3\o\o\6\x\5\e\o\h\k\0\d\7\i\u\l\m\l\n\i\q\i\s\b\0\7\u\v\x\l\j\r\8\n\w\g\i\n\c\d\o\k\8\9\k\7\u\6\y\7\k\7\c\o\t\1\y\i\7\x\h\0\x\m\g\e\s\1\0\g\9\p\d\8\u\4\4\4\1\l\g\t\x\y\a\o\e\6\3\u\s\2\o\0\m\6\e\a\o\o\6\m\y\r\e\x\f\u\q\6\2\2\h\2\7\4\r\b\2\o\f\n\3\y\9\m\j\j\e\7\y\y\h\8\b\m\j\3\6\z\h\0\9\6\a\y\8\x\5\q\y\8\x\n\v\9\0\g\f\3\6\e\8\c\r\8\9\c\j\i\f\g\z\m\v\6\v\l\7\h\g\p\5\3\2\f\4\4\j\e\u\4\a\t\m\m\9\c\z\6\h\v\d\3\m\a\8\i\0\d\n\v\h\f\1\n\w\n\e\k ]] 00:06:29.162 13:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.162 13:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:29.420 [2024-07-25 13:02:21.360540] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:29.420 [2024-07-25 13:02:21.360684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62527 ] 00:06:29.420 [2024-07-25 13:02:21.497185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.420 [2024-07-25 13:02:21.581973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.678 [2024-07-25 13:02:21.638376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.936  Copying: 512/512 [B] (average 500 kBps) 00:06:29.936 00:06:29.936 13:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ klrstr7srrzkowemjlcetuv9rg83vlgt9bbc2hir4fhcg5hz27iogdi74xdxybbmg0txeu3bk5ij7t8g2yzm07kdxwjm5smi34gvrr7ktmw48aofe03kjrykdftu1duvvumqcsoay1q5erz4rd0wbpuoi36y90rr5axkoluwhwdf4qheaswgz4wh4iz5ul4ba082x0lrttgwc4irm2qpf6r3dr4tvz1lec6x3a0x43fb2rg9p3rmkslaem72w50d1kh6ckvqzsz1k8iqybc7afurvku7k9tqkd1w3oo6x5eohk0d7iulmlniqisb07uvxljr8nwgincdok89k7u6y7k7cot1yi7xh0xmges10g9pd8u4441lgtxyaoe63us2o0m6eaoo6myrexfuq622h274rb2ofn3y9mjje7yyh8bmj36zh096ay8x5qy8xnv90gf36e8cr89cjifgzmv6vl7hgp532f44jeu4atmm9cz6hvd3ma8i0dnvhf1nwnek == \k\l\r\s\t\r\7\s\r\r\z\k\o\w\e\m\j\l\c\e\t\u\v\9\r\g\8\3\v\l\g\t\9\b\b\c\2\h\i\r\4\f\h\c\g\5\h\z\2\7\i\o\g\d\i\7\4\x\d\x\y\b\b\m\g\0\t\x\e\u\3\b\k\5\i\j\7\t\8\g\2\y\z\m\0\7\k\d\x\w\j\m\5\s\m\i\3\4\g\v\r\r\7\k\t\m\w\4\8\a\o\f\e\0\3\k\j\r\y\k\d\f\t\u\1\d\u\v\v\u\m\q\c\s\o\a\y\1\q\5\e\r\z\4\r\d\0\w\b\p\u\o\i\3\6\y\9\0\r\r\5\a\x\k\o\l\u\w\h\w\d\f\4\q\h\e\a\s\w\g\z\4\w\h\4\i\z\5\u\l\4\b\a\0\8\2\x\0\l\r\t\t\g\w\c\4\i\r\m\2\q\p\f\6\r\3\d\r\4\t\v\z\1\l\e\c\6\x\3\a\0\x\4\3\f\b\2\r\g\9\p\3\r\m\k\s\l\a\e\m\7\2\w\5\0\d\1\k\h\6\c\k\v\q\z\s\z\1\k\8\i\q\y\b\c\7\a\f\u\r\v\k\u\7\k\9\t\q\k\d\1\w\3\o\o\6\x\5\e\o\h\k\0\d\7\i\u\l\m\l\n\i\q\i\s\b\0\7\u\v\x\l\j\r\8\n\w\g\i\n\c\d\o\k\8\9\k\7\u\6\y\7\k\7\c\o\t\1\y\i\7\x\h\0\x\m\g\e\s\1\0\g\9\p\d\8\u\4\4\4\1\l\g\t\x\y\a\o\e\6\3\u\s\2\o\0\m\6\e\a\o\o\6\m\y\r\e\x\f\u\q\6\2\2\h\2\7\4\r\b\2\o\f\n\3\y\9\m\j\j\e\7\y\y\h\8\b\m\j\3\6\z\h\0\9\6\a\y\8\x\5\q\y\8\x\n\v\9\0\g\f\3\6\e\8\c\r\8\9\c\j\i\f\g\z\m\v\6\v\l\7\h\g\p\5\3\2\f\4\4\j\e\u\4\a\t\m\m\9\c\z\6\h\v\d\3\m\a\8\i\0\d\n\v\h\f\1\n\w\n\e\k ]] 00:06:29.936 13:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.936 13:02:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:29.936 [2024-07-25 13:02:21.981345] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:29.936 [2024-07-25 13:02:21.981468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62534 ] 00:06:29.936 [2024-07-25 13:02:22.114978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.193 [2024-07-25 13:02:22.223137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.194 [2024-07-25 13:02:22.281746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.451  Copying: 512/512 [B] (average 125 kBps) 00:06:30.451 00:06:30.451 13:02:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ klrstr7srrzkowemjlcetuv9rg83vlgt9bbc2hir4fhcg5hz27iogdi74xdxybbmg0txeu3bk5ij7t8g2yzm07kdxwjm5smi34gvrr7ktmw48aofe03kjrykdftu1duvvumqcsoay1q5erz4rd0wbpuoi36y90rr5axkoluwhwdf4qheaswgz4wh4iz5ul4ba082x0lrttgwc4irm2qpf6r3dr4tvz1lec6x3a0x43fb2rg9p3rmkslaem72w50d1kh6ckvqzsz1k8iqybc7afurvku7k9tqkd1w3oo6x5eohk0d7iulmlniqisb07uvxljr8nwgincdok89k7u6y7k7cot1yi7xh0xmges10g9pd8u4441lgtxyaoe63us2o0m6eaoo6myrexfuq622h274rb2ofn3y9mjje7yyh8bmj36zh096ay8x5qy8xnv90gf36e8cr89cjifgzmv6vl7hgp532f44jeu4atmm9cz6hvd3ma8i0dnvhf1nwnek == \k\l\r\s\t\r\7\s\r\r\z\k\o\w\e\m\j\l\c\e\t\u\v\9\r\g\8\3\v\l\g\t\9\b\b\c\2\h\i\r\4\f\h\c\g\5\h\z\2\7\i\o\g\d\i\7\4\x\d\x\y\b\b\m\g\0\t\x\e\u\3\b\k\5\i\j\7\t\8\g\2\y\z\m\0\7\k\d\x\w\j\m\5\s\m\i\3\4\g\v\r\r\7\k\t\m\w\4\8\a\o\f\e\0\3\k\j\r\y\k\d\f\t\u\1\d\u\v\v\u\m\q\c\s\o\a\y\1\q\5\e\r\z\4\r\d\0\w\b\p\u\o\i\3\6\y\9\0\r\r\5\a\x\k\o\l\u\w\h\w\d\f\4\q\h\e\a\s\w\g\z\4\w\h\4\i\z\5\u\l\4\b\a\0\8\2\x\0\l\r\t\t\g\w\c\4\i\r\m\2\q\p\f\6\r\3\d\r\4\t\v\z\1\l\e\c\6\x\3\a\0\x\4\3\f\b\2\r\g\9\p\3\r\m\k\s\l\a\e\m\7\2\w\5\0\d\1\k\h\6\c\k\v\q\z\s\z\1\k\8\i\q\y\b\c\7\a\f\u\r\v\k\u\7\k\9\t\q\k\d\1\w\3\o\o\6\x\5\e\o\h\k\0\d\7\i\u\l\m\l\n\i\q\i\s\b\0\7\u\v\x\l\j\r\8\n\w\g\i\n\c\d\o\k\8\9\k\7\u\6\y\7\k\7\c\o\t\1\y\i\7\x\h\0\x\m\g\e\s\1\0\g\9\p\d\8\u\4\4\4\1\l\g\t\x\y\a\o\e\6\3\u\s\2\o\0\m\6\e\a\o\o\6\m\y\r\e\x\f\u\q\6\2\2\h\2\7\4\r\b\2\o\f\n\3\y\9\m\j\j\e\7\y\y\h\8\b\m\j\3\6\z\h\0\9\6\a\y\8\x\5\q\y\8\x\n\v\9\0\g\f\3\6\e\8\c\r\8\9\c\j\i\f\g\z\m\v\6\v\l\7\h\g\p\5\3\2\f\4\4\j\e\u\4\a\t\m\m\9\c\z\6\h\v\d\3\m\a\8\i\0\d\n\v\h\f\1\n\w\n\e\k ]] 00:06:30.451 13:02:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.451 13:02:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:30.451 [2024-07-25 13:02:22.628777] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:30.451 [2024-07-25 13:02:22.628891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62542 ] 00:06:30.708 [2024-07-25 13:02:22.765186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.708 [2024-07-25 13:02:22.863199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.966 [2024-07-25 13:02:22.917120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.224  Copying: 512/512 [B] (average 250 kBps) 00:06:31.224 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ klrstr7srrzkowemjlcetuv9rg83vlgt9bbc2hir4fhcg5hz27iogdi74xdxybbmg0txeu3bk5ij7t8g2yzm07kdxwjm5smi34gvrr7ktmw48aofe03kjrykdftu1duvvumqcsoay1q5erz4rd0wbpuoi36y90rr5axkoluwhwdf4qheaswgz4wh4iz5ul4ba082x0lrttgwc4irm2qpf6r3dr4tvz1lec6x3a0x43fb2rg9p3rmkslaem72w50d1kh6ckvqzsz1k8iqybc7afurvku7k9tqkd1w3oo6x5eohk0d7iulmlniqisb07uvxljr8nwgincdok89k7u6y7k7cot1yi7xh0xmges10g9pd8u4441lgtxyaoe63us2o0m6eaoo6myrexfuq622h274rb2ofn3y9mjje7yyh8bmj36zh096ay8x5qy8xnv90gf36e8cr89cjifgzmv6vl7hgp532f44jeu4atmm9cz6hvd3ma8i0dnvhf1nwnek == \k\l\r\s\t\r\7\s\r\r\z\k\o\w\e\m\j\l\c\e\t\u\v\9\r\g\8\3\v\l\g\t\9\b\b\c\2\h\i\r\4\f\h\c\g\5\h\z\2\7\i\o\g\d\i\7\4\x\d\x\y\b\b\m\g\0\t\x\e\u\3\b\k\5\i\j\7\t\8\g\2\y\z\m\0\7\k\d\x\w\j\m\5\s\m\i\3\4\g\v\r\r\7\k\t\m\w\4\8\a\o\f\e\0\3\k\j\r\y\k\d\f\t\u\1\d\u\v\v\u\m\q\c\s\o\a\y\1\q\5\e\r\z\4\r\d\0\w\b\p\u\o\i\3\6\y\9\0\r\r\5\a\x\k\o\l\u\w\h\w\d\f\4\q\h\e\a\s\w\g\z\4\w\h\4\i\z\5\u\l\4\b\a\0\8\2\x\0\l\r\t\t\g\w\c\4\i\r\m\2\q\p\f\6\r\3\d\r\4\t\v\z\1\l\e\c\6\x\3\a\0\x\4\3\f\b\2\r\g\9\p\3\r\m\k\s\l\a\e\m\7\2\w\5\0\d\1\k\h\6\c\k\v\q\z\s\z\1\k\8\i\q\y\b\c\7\a\f\u\r\v\k\u\7\k\9\t\q\k\d\1\w\3\o\o\6\x\5\e\o\h\k\0\d\7\i\u\l\m\l\n\i\q\i\s\b\0\7\u\v\x\l\j\r\8\n\w\g\i\n\c\d\o\k\8\9\k\7\u\6\y\7\k\7\c\o\t\1\y\i\7\x\h\0\x\m\g\e\s\1\0\g\9\p\d\8\u\4\4\4\1\l\g\t\x\y\a\o\e\6\3\u\s\2\o\0\m\6\e\a\o\o\6\m\y\r\e\x\f\u\q\6\2\2\h\2\7\4\r\b\2\o\f\n\3\y\9\m\j\j\e\7\y\y\h\8\b\m\j\3\6\z\h\0\9\6\a\y\8\x\5\q\y\8\x\n\v\9\0\g\f\3\6\e\8\c\r\8\9\c\j\i\f\g\z\m\v\6\v\l\7\h\g\p\5\3\2\f\4\4\j\e\u\4\a\t\m\m\9\c\z\6\h\v\d\3\m\a\8\i\0\d\n\v\h\f\1\n\w\n\e\k ]] 00:06:31.224 00:06:31.224 real 0m5.024s 00:06:31.224 user 0m2.846s 00:06:31.224 sys 0m1.208s 00:06:31.224 ************************************ 00:06:31.224 END TEST dd_flags_misc_forced_aio 00:06:31.224 ************************************ 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:31.224 ************************************ 00:06:31.224 END TEST spdk_dd_posix 00:06:31.224 ************************************ 00:06:31.224 00:06:31.224 real 0m22.630s 00:06:31.224 user 0m11.678s 00:06:31.224 sys 0m6.926s 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:06:31.224 13:02:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:31.224 13:02:23 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:31.224 13:02:23 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.224 13:02:23 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.224 13:02:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:31.224 ************************************ 00:06:31.224 START TEST spdk_dd_malloc 00:06:31.224 ************************************ 00:06:31.224 13:02:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:31.224 * Looking for test storage... 00:06:31.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:31.225 ************************************ 00:06:31.225 START TEST dd_malloc_copy 00:06:31.225 ************************************ 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:31.225 13:02:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.482 { 00:06:31.482 "subsystems": [ 00:06:31.482 { 00:06:31.482 "subsystem": "bdev", 00:06:31.482 "config": [ 00:06:31.482 { 00:06:31.482 "params": { 00:06:31.482 "block_size": 512, 00:06:31.482 "num_blocks": 1048576, 00:06:31.482 "name": "malloc0" 00:06:31.482 }, 00:06:31.482 "method": "bdev_malloc_create" 00:06:31.482 }, 00:06:31.482 { 00:06:31.482 "params": { 00:06:31.482 "block_size": 512, 00:06:31.482 "num_blocks": 1048576, 00:06:31.482 "name": "malloc1" 00:06:31.482 }, 00:06:31.482 "method": "bdev_malloc_create" 00:06:31.482 }, 00:06:31.482 { 00:06:31.482 "method": "bdev_wait_for_examine" 00:06:31.482 } 00:06:31.482 ] 00:06:31.482 } 00:06:31.482 ] 00:06:31.482 } 00:06:31.482 [2024-07-25 13:02:23.464561] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
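The malloc-to-malloc copy starting here takes no file arguments at all: both endpoints are bdevs defined by the JSON config dumped just above and handed to spdk_dd on a file descriptor. A minimal standalone sketch of the same invocation, assuming the /dev/fd/62 argument seen in the trace comes from a process substitution (names and block counts copied from the dumped config):

gen_malloc_conf() {
    cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
  {"method":"bdev_malloc_create","params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
}
# copy 512 MiB (1048576 blocks of 512 B) from one ram-backed bdev to the other
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(gen_malloc_conf)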
00:06:31.482 [2024-07-25 13:02:23.464698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62616 ] 00:06:31.482 [2024-07-25 13:02:23.614745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.740 [2024-07-25 13:02:23.724806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.740 [2024-07-25 13:02:23.782457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.265  Copying: 218/512 [MB] (218 MBps) Copying: 435/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 217 MBps) 00:06:35.265 00:06:35.265 13:02:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:35.265 13:02:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:35.265 13:02:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:35.265 13:02:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:35.265 [2024-07-25 13:02:27.169616] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:35.265 [2024-07-25 13:02:27.169726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62663 ] 00:06:35.265 { 00:06:35.265 "subsystems": [ 00:06:35.265 { 00:06:35.265 "subsystem": "bdev", 00:06:35.265 "config": [ 00:06:35.265 { 00:06:35.265 "params": { 00:06:35.265 "block_size": 512, 00:06:35.265 "num_blocks": 1048576, 00:06:35.265 "name": "malloc0" 00:06:35.265 }, 00:06:35.265 "method": "bdev_malloc_create" 00:06:35.265 }, 00:06:35.265 { 00:06:35.265 "params": { 00:06:35.265 "block_size": 512, 00:06:35.265 "num_blocks": 1048576, 00:06:35.265 "name": "malloc1" 00:06:35.265 }, 00:06:35.266 "method": "bdev_malloc_create" 00:06:35.266 }, 00:06:35.266 { 00:06:35.266 "method": "bdev_wait_for_examine" 00:06:35.266 } 00:06:35.266 ] 00:06:35.266 } 00:06:35.266 ] 00:06:35.266 } 00:06:35.266 [2024-07-25 13:02:27.312597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.266 [2024-07-25 13:02:27.423045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.523 [2024-07-25 13:02:27.481723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.025  Copying: 215/512 [MB] (215 MBps) Copying: 427/512 [MB] (211 MBps) Copying: 512/512 [MB] (average 214 MBps) 00:06:39.025 00:06:39.025 00:06:39.025 real 0m7.468s 00:06:39.025 user 0m6.372s 00:06:39.025 sys 0m0.920s 00:06:39.025 13:02:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.025 13:02:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.025 ************************************ 00:06:39.025 END TEST dd_malloc_copy 00:06:39.025 ************************************ 00:06:39.025 ************************************ 00:06:39.025 END TEST spdk_dd_malloc 00:06:39.025 ************************************ 00:06:39.025 00:06:39.025 real 0m7.603s 00:06:39.025 user 0m6.434s 00:06:39.025 sys 0m0.993s 00:06:39.025 13:02:30 
spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.025 13:02:30 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:39.025 13:02:30 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:39.025 13:02:30 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:39.025 13:02:30 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.025 13:02:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:39.025 ************************************ 00:06:39.025 START TEST spdk_dd_bdev_to_bdev 00:06:39.025 ************************************ 00:06:39.025 13:02:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:39.025 * Looking for test storage... 00:06:39.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:39.025 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:39.026 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:39.026 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:39.026 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.026 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.026 ************************************ 00:06:39.026 START TEST dd_inflate_file 00:06:39.026 ************************************ 00:06:39.026 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--oflag=append --bs=1048576 --count=64 00:06:39.026 [2024-07-25 13:02:31.098912] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:39.026 [2024-07-25 13:02:31.099036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62768 ] 00:06:39.284 [2024-07-25 13:02:31.237778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.284 [2024-07-25 13:02:31.339669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.284 [2024-07-25 13:02:31.393032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.542  Copying: 64/64 [MB] (average 1560 MBps) 00:06:39.542 00:06:39.542 00:06:39.542 real 0m0.623s 00:06:39.542 user 0m0.363s 00:06:39.542 sys 0m0.313s 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.542 ************************************ 00:06:39.542 END TEST dd_inflate_file 00:06:39.542 ************************************ 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.542 ************************************ 00:06:39.542 START TEST dd_copy_to_out_bdev 00:06:39.542 ************************************ 00:06:39.542 13:02:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:39.800 { 00:06:39.800 "subsystems": [ 00:06:39.800 { 00:06:39.800 "subsystem": "bdev", 00:06:39.800 "config": [ 00:06:39.800 { 00:06:39.800 "params": { 00:06:39.800 "trtype": "pcie", 00:06:39.800 "traddr": "0000:00:10.0", 00:06:39.800 "name": "Nvme0" 00:06:39.800 }, 00:06:39.800 "method": "bdev_nvme_attach_controller" 00:06:39.800 }, 00:06:39.800 { 00:06:39.800 "params": { 00:06:39.800 "trtype": "pcie", 00:06:39.800 "traddr": "0000:00:11.0", 00:06:39.800 "name": "Nvme1" 00:06:39.800 }, 00:06:39.800 "method": "bdev_nvme_attach_controller" 00:06:39.800 }, 00:06:39.800 { 00:06:39.800 "method": "bdev_wait_for_examine" 00:06:39.800 } 00:06:39.800 ] 00:06:39.800 } 00:06:39.800 ] 00:06:39.800 } 00:06:39.800 [2024-07-25 13:02:31.783550] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 
initialization... 00:06:39.800 [2024-07-25 13:02:31.783657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62807 ] 00:06:39.800 [2024-07-25 13:02:31.921868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.059 [2024-07-25 13:02:32.020830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.059 [2024-07-25 13:02:32.074924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.694  Copying: 50/64 [MB] (50 MBps) Copying: 64/64 [MB] (average 51 MBps) 00:06:41.694 00:06:41.694 00:06:41.694 real 0m2.011s 00:06:41.694 user 0m1.780s 00:06:41.694 sys 0m1.591s 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:41.694 ************************************ 00:06:41.694 END TEST dd_copy_to_out_bdev 00:06:41.694 ************************************ 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:41.694 ************************************ 00:06:41.694 START TEST dd_offset_magic 00:06:41.694 ************************************ 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:41.694 13:02:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:41.694 [2024-07-25 13:02:33.839604] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
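The offset-magic pass beginning here writes 65 MiB from Nvme0n1 into Nvme1n1 at a 16 MiB offset (--seek=16 with a 1 MiB block size), then reads a single block back from that offset and re-checks the 26-byte magic prefix; the same pair is repeated further down with offset 64. A minimal sketch of one iteration, with counts and paths taken from the trace and the JSON helper assumed to be wired in via process substitution:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
gen_nvme_conf() {            # same two PCIe controllers as the config dumped in the trace
    cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0","trtype":"pcie","traddr":"0000:00:10.0"}},
  {"method":"bdev_nvme_attach_controller","params":{"name":"Nvme1","trtype":"pcie","traddr":"0000:00:11.0"}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
}
offset=16                                                          # repeated with offset=64 below
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --bs=1048576 --count=65 --seek="$offset" --json <(gen_nvme_conf)
"$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --bs=1048576 --count=1 --skip="$offset" --json <(gen_nvme_conf)
read -rn26 magic_check < "$DUMP1"                                  # first 26 bytes of the read-back block
[[ "$magic_check" == "This Is Our Magic, find it" ]]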
00:06:41.694 [2024-07-25 13:02:33.839692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62852 ] 00:06:41.694 { 00:06:41.694 "subsystems": [ 00:06:41.694 { 00:06:41.694 "subsystem": "bdev", 00:06:41.694 "config": [ 00:06:41.694 { 00:06:41.694 "params": { 00:06:41.694 "trtype": "pcie", 00:06:41.694 "traddr": "0000:00:10.0", 00:06:41.694 "name": "Nvme0" 00:06:41.694 }, 00:06:41.694 "method": "bdev_nvme_attach_controller" 00:06:41.694 }, 00:06:41.694 { 00:06:41.694 "params": { 00:06:41.694 "trtype": "pcie", 00:06:41.694 "traddr": "0000:00:11.0", 00:06:41.694 "name": "Nvme1" 00:06:41.694 }, 00:06:41.694 "method": "bdev_nvme_attach_controller" 00:06:41.694 }, 00:06:41.694 { 00:06:41.694 "method": "bdev_wait_for_examine" 00:06:41.694 } 00:06:41.694 ] 00:06:41.694 } 00:06:41.694 ] 00:06:41.694 } 00:06:41.953 [2024-07-25 13:02:33.971532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.953 [2024-07-25 13:02:34.068887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.953 [2024-07-25 13:02:34.122099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.469  Copying: 65/65 [MB] (average 890 MBps) 00:06:42.469 00:06:42.469 13:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:42.469 13:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:42.469 13:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:42.469 13:02:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:42.727 [2024-07-25 13:02:34.669983] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:42.727 [2024-07-25 13:02:34.670098] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62872 ] 00:06:42.727 { 00:06:42.727 "subsystems": [ 00:06:42.727 { 00:06:42.727 "subsystem": "bdev", 00:06:42.727 "config": [ 00:06:42.727 { 00:06:42.727 "params": { 00:06:42.727 "trtype": "pcie", 00:06:42.727 "traddr": "0000:00:10.0", 00:06:42.727 "name": "Nvme0" 00:06:42.727 }, 00:06:42.727 "method": "bdev_nvme_attach_controller" 00:06:42.727 }, 00:06:42.727 { 00:06:42.727 "params": { 00:06:42.727 "trtype": "pcie", 00:06:42.727 "traddr": "0000:00:11.0", 00:06:42.727 "name": "Nvme1" 00:06:42.727 }, 00:06:42.727 "method": "bdev_nvme_attach_controller" 00:06:42.727 }, 00:06:42.727 { 00:06:42.728 "method": "bdev_wait_for_examine" 00:06:42.728 } 00:06:42.728 ] 00:06:42.728 } 00:06:42.728 ] 00:06:42.728 } 00:06:42.728 [2024-07-25 13:02:34.808375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.728 [2024-07-25 13:02:34.909475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.986 [2024-07-25 13:02:34.965870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.244  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:43.244 00:06:43.244 13:02:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:43.244 13:02:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:43.244 13:02:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:43.244 13:02:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:43.244 13:02:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:43.244 13:02:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:43.244 13:02:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:43.244 [2024-07-25 13:02:35.415027] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:43.244 [2024-07-25 13:02:35.415145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62883 ] 00:06:43.244 { 00:06:43.244 "subsystems": [ 00:06:43.244 { 00:06:43.244 "subsystem": "bdev", 00:06:43.244 "config": [ 00:06:43.244 { 00:06:43.244 "params": { 00:06:43.244 "trtype": "pcie", 00:06:43.244 "traddr": "0000:00:10.0", 00:06:43.244 "name": "Nvme0" 00:06:43.245 }, 00:06:43.245 "method": "bdev_nvme_attach_controller" 00:06:43.245 }, 00:06:43.245 { 00:06:43.245 "params": { 00:06:43.245 "trtype": "pcie", 00:06:43.245 "traddr": "0000:00:11.0", 00:06:43.245 "name": "Nvme1" 00:06:43.245 }, 00:06:43.245 "method": "bdev_nvme_attach_controller" 00:06:43.245 }, 00:06:43.245 { 00:06:43.245 "method": "bdev_wait_for_examine" 00:06:43.245 } 00:06:43.245 ] 00:06:43.245 } 00:06:43.245 ] 00:06:43.245 } 00:06:43.502 [2024-07-25 13:02:35.554638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.502 [2024-07-25 13:02:35.654184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.766 [2024-07-25 13:02:35.709621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.047  Copying: 65/65 [MB] (average 942 MBps) 00:06:44.047 00:06:44.047 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:44.047 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:44.047 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:44.047 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:44.306 [2024-07-25 13:02:36.268768] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:44.306 [2024-07-25 13:02:36.268890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62904 ] 00:06:44.306 { 00:06:44.306 "subsystems": [ 00:06:44.306 { 00:06:44.306 "subsystem": "bdev", 00:06:44.306 "config": [ 00:06:44.306 { 00:06:44.306 "params": { 00:06:44.306 "trtype": "pcie", 00:06:44.306 "traddr": "0000:00:10.0", 00:06:44.306 "name": "Nvme0" 00:06:44.306 }, 00:06:44.306 "method": "bdev_nvme_attach_controller" 00:06:44.306 }, 00:06:44.306 { 00:06:44.306 "params": { 00:06:44.306 "trtype": "pcie", 00:06:44.306 "traddr": "0000:00:11.0", 00:06:44.306 "name": "Nvme1" 00:06:44.306 }, 00:06:44.306 "method": "bdev_nvme_attach_controller" 00:06:44.306 }, 00:06:44.306 { 00:06:44.306 "method": "bdev_wait_for_examine" 00:06:44.306 } 00:06:44.306 ] 00:06:44.306 } 00:06:44.306 ] 00:06:44.306 } 00:06:44.306 [2024-07-25 13:02:36.407902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.563 [2024-07-25 13:02:36.514930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.563 [2024-07-25 13:02:36.571739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.820  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:44.820 00:06:44.820 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:44.820 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:44.820 00:06:44.820 real 0m3.170s 00:06:44.820 user 0m2.309s 00:06:44.820 sys 0m0.935s 00:06:44.820 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.820 13:02:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:44.820 ************************************ 00:06:44.820 END TEST dd_offset_magic 00:06:44.820 ************************************ 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:44.820 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.078 [2024-07-25 13:02:37.065232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:45.078 [2024-07-25 13:02:37.065331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62941 ] 00:06:45.078 { 00:06:45.078 "subsystems": [ 00:06:45.078 { 00:06:45.078 "subsystem": "bdev", 00:06:45.078 "config": [ 00:06:45.078 { 00:06:45.078 "params": { 00:06:45.078 "trtype": "pcie", 00:06:45.078 "traddr": "0000:00:10.0", 00:06:45.078 "name": "Nvme0" 00:06:45.078 }, 00:06:45.078 "method": "bdev_nvme_attach_controller" 00:06:45.078 }, 00:06:45.078 { 00:06:45.078 "params": { 00:06:45.078 "trtype": "pcie", 00:06:45.078 "traddr": "0000:00:11.0", 00:06:45.078 "name": "Nvme1" 00:06:45.078 }, 00:06:45.078 "method": "bdev_nvme_attach_controller" 00:06:45.078 }, 00:06:45.078 { 00:06:45.078 "method": "bdev_wait_for_examine" 00:06:45.078 } 00:06:45.078 ] 00:06:45.078 } 00:06:45.078 ] 00:06:45.078 } 00:06:45.078 [2024-07-25 13:02:37.208215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.336 [2024-07-25 13:02:37.323497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.336 [2024-07-25 13:02:37.382361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.593  Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:45.593 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:45.593 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:45.850 13:02:37 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.850 [2024-07-25 13:02:37.854874] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:45.850 [2024-07-25 13:02:37.855009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62962 ] 00:06:45.850 { 00:06:45.850 "subsystems": [ 00:06:45.850 { 00:06:45.850 "subsystem": "bdev", 00:06:45.850 "config": [ 00:06:45.850 { 00:06:45.850 "params": { 00:06:45.850 "trtype": "pcie", 00:06:45.850 "traddr": "0000:00:10.0", 00:06:45.850 "name": "Nvme0" 00:06:45.850 }, 00:06:45.850 "method": "bdev_nvme_attach_controller" 00:06:45.850 }, 00:06:45.850 { 00:06:45.850 "params": { 00:06:45.850 "trtype": "pcie", 00:06:45.850 "traddr": "0000:00:11.0", 00:06:45.850 "name": "Nvme1" 00:06:45.850 }, 00:06:45.850 "method": "bdev_nvme_attach_controller" 00:06:45.850 }, 00:06:45.850 { 00:06:45.850 "method": "bdev_wait_for_examine" 00:06:45.850 } 00:06:45.850 ] 00:06:45.850 } 00:06:45.850 ] 00:06:45.850 } 00:06:45.850 [2024-07-25 13:02:37.998312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.109 [2024-07-25 13:02:38.097759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.109 [2024-07-25 13:02:38.151700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.367  Copying: 5120/5120 [kB] (average 833 MBps) 00:06:46.367 00:06:46.367 13:02:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:46.633 00:06:46.634 real 0m7.627s 00:06:46.634 user 0m5.679s 00:06:46.634 sys 0m3.551s 00:06:46.634 13:02:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.634 13:02:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:46.634 ************************************ 00:06:46.634 END TEST spdk_dd_bdev_to_bdev 00:06:46.634 ************************************ 00:06:46.634 13:02:38 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:46.634 13:02:38 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:46.634 13:02:38 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.634 13:02:38 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.634 13:02:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:46.634 ************************************ 00:06:46.634 START TEST spdk_dd_uring 00:06:46.634 ************************************ 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:46.634 * Looking for test storage... 
00:06:46.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:46.634 ************************************ 00:06:46.634 START TEST dd_uring_copy 00:06:46.634 ************************************ 00:06:46.634 
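The uring copy test starting here stages its data on a compressed-RAM (zram) block device and exposes that device to spdk_dd as an io_uring-backed bdev named uring0. A minimal sketch of the setup the trace below performs, with the 512M size and bdev parameters taken from the trace (the id returned by hot_add, and therefore the /dev/zramN path, can differ on other hosts):

# allocate and size a zram device (mirrors create_zram_dev/set_zram_dev in dd/common.sh)
zram_id=$(cat /sys/class/zram-control/hot_add)
echo 512M > "/sys/block/zram${zram_id}/disksize"

# JSON config pairing the zram-backed uring bdev with a 512 MiB malloc bdev
gen_uring_conf() {
    cat <<JSON
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_uring_create","params":{"filename":"/dev/zram${zram_id}","name":"uring0"}},
  {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
}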
13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=g9jtzpwgquhdcd8g50phop83bh00cw9qcvzqztt06bujs9u1ha5v34cwvwum0q167xd7viltxi2s6sz6f38fj983qii7ug627a76u65drteh88jqgi8ut71t8l5o0lhrbm323tzkusdh7pizhbizp9m2ak77friy4vgf1u8uy9cq0ayk25e80my0ujh0lerbez3x67hrb4q64iuusxsffyowrgp4mahuwrs4kyluz90xo81tirn9ln04jg9940ziqr3xf83ut9eir50u2r8xwptrpku9uxqku2xm1wl6vlwbsx37akjrihgm2lsxeqkyaipfi7aatrkk8vpl2d41o07x0ltnue18nrbh4f2to6hekjsub9lzkirs8yuo3666zbqgu4abydy7srl7r9y2wqoquqkccpe520b0j7v4zve5gxo3ubgpo5f136r9pwev7cnsojxze7b28swgah07csoqb1sbkl2ii6uyix7oue1kydths3ftngm5aam74rtnc2tikrvly41uzxu45l71x4h0p6vgxkuz08hymmiopd09l4evwdg1j3mdnd9o2voz8tf8w6vz8ylgnjccuf03lwzd7cct606eoi841y5mg0hxuagujn80ha9gawk7cbtdogdrh7qy20msmtbv4ypmr5epbwscr5pavvwmfcvuowcfwkv6awbzdq9ra40b4d00h4buvptsd7c3f8k46l6ri53lv5i3moqpe7k3ac6x5a8bfgtfu1onah4jmb1motbsdmd4j8vzsczv547sdcmji3ceh9w5rrh5numdqaav5ny92zqaj6ehm2fgvnd36ob6dhp9q7lf1k62gwigjkz38u9am86c6qkqds18h1azaw3mvz28ykut5g1gcawyzvc5pz15ddxw200j22muzley2r1ecpypqp92mzmx2v5k7q8cn0s6wisyp116rf638uc3qmzwvigix16tso3gom4mqh0suhgp1kct3bozxc9oacbxq6y3txyf4787y7v0rtoc 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo g9jtzpwgquhdcd8g50phop83bh00cw9qcvzqztt06bujs9u1ha5v34cwvwum0q167xd7viltxi2s6sz6f38fj983qii7ug627a76u65drteh88jqgi8ut71t8l5o0lhrbm323tzkusdh7pizhbizp9m2ak77friy4vgf1u8uy9cq0ayk25e80my0ujh0lerbez3x67hrb4q64iuusxsffyowrgp4mahuwrs4kyluz90xo81tirn9ln04jg9940ziqr3xf83ut9eir50u2r8xwptrpku9uxqku2xm1wl6vlwbsx37akjrihgm2lsxeqkyaipfi7aatrkk8vpl2d41o07x0ltnue18nrbh4f2to6hekjsub9lzkirs8yuo3666zbqgu4abydy7srl7r9y2wqoquqkccpe520b0j7v4zve5gxo3ubgpo5f136r9pwev7cnsojxze7b28swgah07csoqb1sbkl2ii6uyix7oue1kydths3ftngm5aam74rtnc2tikrvly41uzxu45l71x4h0p6vgxkuz08hymmiopd09l4evwdg1j3mdnd9o2voz8tf8w6vz8ylgnjccuf03lwzd7cct606eoi841y5mg0hxuagujn80ha9gawk7cbtdogdrh7qy20msmtbv4ypmr5epbwscr5pavvwmfcvuowcfwkv6awbzdq9ra40b4d00h4buvptsd7c3f8k46l6ri53lv5i3moqpe7k3ac6x5a8bfgtfu1onah4jmb1motbsdmd4j8vzsczv547sdcmji3ceh9w5rrh5numdqaav5ny92zqaj6ehm2fgvnd36ob6dhp9q7lf1k62gwigjkz38u9am86c6qkqds18h1azaw3mvz28ykut5g1gcawyzvc5pz15ddxw200j22muzley2r1ecpypqp92mzmx2v5k7q8cn0s6wisyp116rf638uc3qmzwvigix16tso3gom4mqh0suhgp1kct3bozxc9oacbxq6y3txyf4787y7v0rtoc 00:06:46.634 13:02:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:46.634 [2024-07-25 13:02:38.793979] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:46.634 [2024-07-25 13:02:38.794088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63032 ] 00:06:46.901 [2024-07-25 13:02:38.931617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.901 [2024-07-25 13:02:39.042610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.158 [2024-07-25 13:02:39.095730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.982  Copying: 511/511 [MB] (average 1442 MBps) 00:06:47.982 00:06:47.982 13:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:47.982 13:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:47.982 13:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.982 13:02:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.982 { 00:06:47.982 "subsystems": [ 00:06:47.982 { 00:06:47.982 "subsystem": "bdev", 00:06:47.982 "config": [ 00:06:47.982 { 00:06:47.982 "params": { 00:06:47.982 "block_size": 512, 00:06:47.982 "num_blocks": 1048576, 00:06:47.983 "name": "malloc0" 00:06:47.983 }, 00:06:47.983 "method": "bdev_malloc_create" 00:06:47.983 }, 00:06:47.983 { 00:06:47.983 "params": { 00:06:47.983 "filename": "/dev/zram1", 00:06:47.983 "name": "uring0" 00:06:47.983 }, 00:06:47.983 "method": "bdev_uring_create" 00:06:47.983 }, 00:06:47.983 { 00:06:47.983 "method": "bdev_wait_for_examine" 00:06:47.983 } 00:06:47.983 ] 00:06:47.983 } 00:06:47.983 ] 00:06:47.983 } 00:06:47.983 [2024-07-25 13:02:40.151100] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:47.983 [2024-07-25 13:02:40.151195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63061 ] 00:06:48.240 [2024-07-25 13:02:40.284207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.240 [2024-07-25 13:02:40.388779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.498 [2024-07-25 13:02:40.444779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.313  Copying: 236/512 [MB] (236 MBps) Copying: 466/512 [MB] (230 MBps) Copying: 512/512 [MB] (average 233 MBps) 00:06:51.313 00:06:51.313 13:02:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:51.313 13:02:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:51.313 13:02:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:51.313 13:02:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.313 [2024-07-25 13:02:43.307944] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:51.313 [2024-07-25 13:02:43.308045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63106 ] 00:06:51.313 { 00:06:51.313 "subsystems": [ 00:06:51.313 { 00:06:51.313 "subsystem": "bdev", 00:06:51.313 "config": [ 00:06:51.313 { 00:06:51.313 "params": { 00:06:51.313 "block_size": 512, 00:06:51.313 "num_blocks": 1048576, 00:06:51.314 "name": "malloc0" 00:06:51.314 }, 00:06:51.314 "method": "bdev_malloc_create" 00:06:51.314 }, 00:06:51.314 { 00:06:51.314 "params": { 00:06:51.314 "filename": "/dev/zram1", 00:06:51.314 "name": "uring0" 00:06:51.314 }, 00:06:51.314 "method": "bdev_uring_create" 00:06:51.314 }, 00:06:51.314 { 00:06:51.314 "method": "bdev_wait_for_examine" 00:06:51.314 } 00:06:51.314 ] 00:06:51.314 } 00:06:51.314 ] 00:06:51.314 } 00:06:51.314 [2024-07-25 13:02:43.446954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.571 [2024-07-25 13:02:43.561167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.571 [2024-07-25 13:02:43.617130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.010  Copying: 186/512 [MB] (186 MBps) Copying: 358/512 [MB] (172 MBps) Copying: 512/512 [MB] (average 181 MBps) 00:06:55.010 00:06:55.010 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:55.011 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ g9jtzpwgquhdcd8g50phop83bh00cw9qcvzqztt06bujs9u1ha5v34cwvwum0q167xd7viltxi2s6sz6f38fj983qii7ug627a76u65drteh88jqgi8ut71t8l5o0lhrbm323tzkusdh7pizhbizp9m2ak77friy4vgf1u8uy9cq0ayk25e80my0ujh0lerbez3x67hrb4q64iuusxsffyowrgp4mahuwrs4kyluz90xo81tirn9ln04jg9940ziqr3xf83ut9eir50u2r8xwptrpku9uxqku2xm1wl6vlwbsx37akjrihgm2lsxeqkyaipfi7aatrkk8vpl2d41o07x0ltnue18nrbh4f2to6hekjsub9lzkirs8yuo3666zbqgu4abydy7srl7r9y2wqoquqkccpe520b0j7v4zve5gxo3ubgpo5f136r9pwev7cnsojxze7b28swgah07csoqb1sbkl2ii6uyix7oue1kydths3ftngm5aam74rtnc2tikrvly41uzxu45l71x4h0p6vgxkuz08hymmiopd09l4evwdg1j3mdnd9o2voz8tf8w6vz8ylgnjccuf03lwzd7cct606eoi841y5mg0hxuagujn80ha9gawk7cbtdogdrh7qy20msmtbv4ypmr5epbwscr5pavvwmfcvuowcfwkv6awbzdq9ra40b4d00h4buvptsd7c3f8k46l6ri53lv5i3moqpe7k3ac6x5a8bfgtfu1onah4jmb1motbsdmd4j8vzsczv547sdcmji3ceh9w5rrh5numdqaav5ny92zqaj6ehm2fgvnd36ob6dhp9q7lf1k62gwigjkz38u9am86c6qkqds18h1azaw3mvz28ykut5g1gcawyzvc5pz15ddxw200j22muzley2r1ecpypqp92mzmx2v5k7q8cn0s6wisyp116rf638uc3qmzwvigix16tso3gom4mqh0suhgp1kct3bozxc9oacbxq6y3txyf4787y7v0rtoc == 
\g\9\j\t\z\p\w\g\q\u\h\d\c\d\8\g\5\0\p\h\o\p\8\3\b\h\0\0\c\w\9\q\c\v\z\q\z\t\t\0\6\b\u\j\s\9\u\1\h\a\5\v\3\4\c\w\v\w\u\m\0\q\1\6\7\x\d\7\v\i\l\t\x\i\2\s\6\s\z\6\f\3\8\f\j\9\8\3\q\i\i\7\u\g\6\2\7\a\7\6\u\6\5\d\r\t\e\h\8\8\j\q\g\i\8\u\t\7\1\t\8\l\5\o\0\l\h\r\b\m\3\2\3\t\z\k\u\s\d\h\7\p\i\z\h\b\i\z\p\9\m\2\a\k\7\7\f\r\i\y\4\v\g\f\1\u\8\u\y\9\c\q\0\a\y\k\2\5\e\8\0\m\y\0\u\j\h\0\l\e\r\b\e\z\3\x\6\7\h\r\b\4\q\6\4\i\u\u\s\x\s\f\f\y\o\w\r\g\p\4\m\a\h\u\w\r\s\4\k\y\l\u\z\9\0\x\o\8\1\t\i\r\n\9\l\n\0\4\j\g\9\9\4\0\z\i\q\r\3\x\f\8\3\u\t\9\e\i\r\5\0\u\2\r\8\x\w\p\t\r\p\k\u\9\u\x\q\k\u\2\x\m\1\w\l\6\v\l\w\b\s\x\3\7\a\k\j\r\i\h\g\m\2\l\s\x\e\q\k\y\a\i\p\f\i\7\a\a\t\r\k\k\8\v\p\l\2\d\4\1\o\0\7\x\0\l\t\n\u\e\1\8\n\r\b\h\4\f\2\t\o\6\h\e\k\j\s\u\b\9\l\z\k\i\r\s\8\y\u\o\3\6\6\6\z\b\q\g\u\4\a\b\y\d\y\7\s\r\l\7\r\9\y\2\w\q\o\q\u\q\k\c\c\p\e\5\2\0\b\0\j\7\v\4\z\v\e\5\g\x\o\3\u\b\g\p\o\5\f\1\3\6\r\9\p\w\e\v\7\c\n\s\o\j\x\z\e\7\b\2\8\s\w\g\a\h\0\7\c\s\o\q\b\1\s\b\k\l\2\i\i\6\u\y\i\x\7\o\u\e\1\k\y\d\t\h\s\3\f\t\n\g\m\5\a\a\m\7\4\r\t\n\c\2\t\i\k\r\v\l\y\4\1\u\z\x\u\4\5\l\7\1\x\4\h\0\p\6\v\g\x\k\u\z\0\8\h\y\m\m\i\o\p\d\0\9\l\4\e\v\w\d\g\1\j\3\m\d\n\d\9\o\2\v\o\z\8\t\f\8\w\6\v\z\8\y\l\g\n\j\c\c\u\f\0\3\l\w\z\d\7\c\c\t\6\0\6\e\o\i\8\4\1\y\5\m\g\0\h\x\u\a\g\u\j\n\8\0\h\a\9\g\a\w\k\7\c\b\t\d\o\g\d\r\h\7\q\y\2\0\m\s\m\t\b\v\4\y\p\m\r\5\e\p\b\w\s\c\r\5\p\a\v\v\w\m\f\c\v\u\o\w\c\f\w\k\v\6\a\w\b\z\d\q\9\r\a\4\0\b\4\d\0\0\h\4\b\u\v\p\t\s\d\7\c\3\f\8\k\4\6\l\6\r\i\5\3\l\v\5\i\3\m\o\q\p\e\7\k\3\a\c\6\x\5\a\8\b\f\g\t\f\u\1\o\n\a\h\4\j\m\b\1\m\o\t\b\s\d\m\d\4\j\8\v\z\s\c\z\v\5\4\7\s\d\c\m\j\i\3\c\e\h\9\w\5\r\r\h\5\n\u\m\d\q\a\a\v\5\n\y\9\2\z\q\a\j\6\e\h\m\2\f\g\v\n\d\3\6\o\b\6\d\h\p\9\q\7\l\f\1\k\6\2\g\w\i\g\j\k\z\3\8\u\9\a\m\8\6\c\6\q\k\q\d\s\1\8\h\1\a\z\a\w\3\m\v\z\2\8\y\k\u\t\5\g\1\g\c\a\w\y\z\v\c\5\p\z\1\5\d\d\x\w\2\0\0\j\2\2\m\u\z\l\e\y\2\r\1\e\c\p\y\p\q\p\9\2\m\z\m\x\2\v\5\k\7\q\8\c\n\0\s\6\w\i\s\y\p\1\1\6\r\f\6\3\8\u\c\3\q\m\z\w\v\i\g\i\x\1\6\t\s\o\3\g\o\m\4\m\q\h\0\s\u\h\g\p\1\k\c\t\3\b\o\z\x\c\9\o\a\c\b\x\q\6\y\3\t\x\y\f\4\7\8\7\y\7\v\0\r\t\o\c ]] 00:06:55.011 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:55.011 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ g9jtzpwgquhdcd8g50phop83bh00cw9qcvzqztt06bujs9u1ha5v34cwvwum0q167xd7viltxi2s6sz6f38fj983qii7ug627a76u65drteh88jqgi8ut71t8l5o0lhrbm323tzkusdh7pizhbizp9m2ak77friy4vgf1u8uy9cq0ayk25e80my0ujh0lerbez3x67hrb4q64iuusxsffyowrgp4mahuwrs4kyluz90xo81tirn9ln04jg9940ziqr3xf83ut9eir50u2r8xwptrpku9uxqku2xm1wl6vlwbsx37akjrihgm2lsxeqkyaipfi7aatrkk8vpl2d41o07x0ltnue18nrbh4f2to6hekjsub9lzkirs8yuo3666zbqgu4abydy7srl7r9y2wqoquqkccpe520b0j7v4zve5gxo3ubgpo5f136r9pwev7cnsojxze7b28swgah07csoqb1sbkl2ii6uyix7oue1kydths3ftngm5aam74rtnc2tikrvly41uzxu45l71x4h0p6vgxkuz08hymmiopd09l4evwdg1j3mdnd9o2voz8tf8w6vz8ylgnjccuf03lwzd7cct606eoi841y5mg0hxuagujn80ha9gawk7cbtdogdrh7qy20msmtbv4ypmr5epbwscr5pavvwmfcvuowcfwkv6awbzdq9ra40b4d00h4buvptsd7c3f8k46l6ri53lv5i3moqpe7k3ac6x5a8bfgtfu1onah4jmb1motbsdmd4j8vzsczv547sdcmji3ceh9w5rrh5numdqaav5ny92zqaj6ehm2fgvnd36ob6dhp9q7lf1k62gwigjkz38u9am86c6qkqds18h1azaw3mvz28ykut5g1gcawyzvc5pz15ddxw200j22muzley2r1ecpypqp92mzmx2v5k7q8cn0s6wisyp116rf638uc3qmzwvigix16tso3gom4mqh0suhgp1kct3bozxc9oacbxq6y3txyf4787y7v0rtoc == 
\g\9\j\t\z\p\w\g\q\u\h\d\c\d\8\g\5\0\p\h\o\p\8\3\b\h\0\0\c\w\9\q\c\v\z\q\z\t\t\0\6\b\u\j\s\9\u\1\h\a\5\v\3\4\c\w\v\w\u\m\0\q\1\6\7\x\d\7\v\i\l\t\x\i\2\s\6\s\z\6\f\3\8\f\j\9\8\3\q\i\i\7\u\g\6\2\7\a\7\6\u\6\5\d\r\t\e\h\8\8\j\q\g\i\8\u\t\7\1\t\8\l\5\o\0\l\h\r\b\m\3\2\3\t\z\k\u\s\d\h\7\p\i\z\h\b\i\z\p\9\m\2\a\k\7\7\f\r\i\y\4\v\g\f\1\u\8\u\y\9\c\q\0\a\y\k\2\5\e\8\0\m\y\0\u\j\h\0\l\e\r\b\e\z\3\x\6\7\h\r\b\4\q\6\4\i\u\u\s\x\s\f\f\y\o\w\r\g\p\4\m\a\h\u\w\r\s\4\k\y\l\u\z\9\0\x\o\8\1\t\i\r\n\9\l\n\0\4\j\g\9\9\4\0\z\i\q\r\3\x\f\8\3\u\t\9\e\i\r\5\0\u\2\r\8\x\w\p\t\r\p\k\u\9\u\x\q\k\u\2\x\m\1\w\l\6\v\l\w\b\s\x\3\7\a\k\j\r\i\h\g\m\2\l\s\x\e\q\k\y\a\i\p\f\i\7\a\a\t\r\k\k\8\v\p\l\2\d\4\1\o\0\7\x\0\l\t\n\u\e\1\8\n\r\b\h\4\f\2\t\o\6\h\e\k\j\s\u\b\9\l\z\k\i\r\s\8\y\u\o\3\6\6\6\z\b\q\g\u\4\a\b\y\d\y\7\s\r\l\7\r\9\y\2\w\q\o\q\u\q\k\c\c\p\e\5\2\0\b\0\j\7\v\4\z\v\e\5\g\x\o\3\u\b\g\p\o\5\f\1\3\6\r\9\p\w\e\v\7\c\n\s\o\j\x\z\e\7\b\2\8\s\w\g\a\h\0\7\c\s\o\q\b\1\s\b\k\l\2\i\i\6\u\y\i\x\7\o\u\e\1\k\y\d\t\h\s\3\f\t\n\g\m\5\a\a\m\7\4\r\t\n\c\2\t\i\k\r\v\l\y\4\1\u\z\x\u\4\5\l\7\1\x\4\h\0\p\6\v\g\x\k\u\z\0\8\h\y\m\m\i\o\p\d\0\9\l\4\e\v\w\d\g\1\j\3\m\d\n\d\9\o\2\v\o\z\8\t\f\8\w\6\v\z\8\y\l\g\n\j\c\c\u\f\0\3\l\w\z\d\7\c\c\t\6\0\6\e\o\i\8\4\1\y\5\m\g\0\h\x\u\a\g\u\j\n\8\0\h\a\9\g\a\w\k\7\c\b\t\d\o\g\d\r\h\7\q\y\2\0\m\s\m\t\b\v\4\y\p\m\r\5\e\p\b\w\s\c\r\5\p\a\v\v\w\m\f\c\v\u\o\w\c\f\w\k\v\6\a\w\b\z\d\q\9\r\a\4\0\b\4\d\0\0\h\4\b\u\v\p\t\s\d\7\c\3\f\8\k\4\6\l\6\r\i\5\3\l\v\5\i\3\m\o\q\p\e\7\k\3\a\c\6\x\5\a\8\b\f\g\t\f\u\1\o\n\a\h\4\j\m\b\1\m\o\t\b\s\d\m\d\4\j\8\v\z\s\c\z\v\5\4\7\s\d\c\m\j\i\3\c\e\h\9\w\5\r\r\h\5\n\u\m\d\q\a\a\v\5\n\y\9\2\z\q\a\j\6\e\h\m\2\f\g\v\n\d\3\6\o\b\6\d\h\p\9\q\7\l\f\1\k\6\2\g\w\i\g\j\k\z\3\8\u\9\a\m\8\6\c\6\q\k\q\d\s\1\8\h\1\a\z\a\w\3\m\v\z\2\8\y\k\u\t\5\g\1\g\c\a\w\y\z\v\c\5\p\z\1\5\d\d\x\w\2\0\0\j\2\2\m\u\z\l\e\y\2\r\1\e\c\p\y\p\q\p\9\2\m\z\m\x\2\v\5\k\7\q\8\c\n\0\s\6\w\i\s\y\p\1\1\6\r\f\6\3\8\u\c\3\q\m\z\w\v\i\g\i\x\1\6\t\s\o\3\g\o\m\4\m\q\h\0\s\u\h\g\p\1\k\c\t\3\b\o\z\x\c\9\o\a\c\b\x\q\6\y\3\t\x\y\f\4\7\8\7\y\7\v\0\r\t\o\c ]] 00:06:55.011 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:55.269 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:55.269 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:55.269 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:55.269 13:02:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.269 [2024-07-25 13:02:47.410742] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:55.269 [2024-07-25 13:02:47.410864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63166 ] 00:06:55.269 { 00:06:55.269 "subsystems": [ 00:06:55.269 { 00:06:55.269 "subsystem": "bdev", 00:06:55.269 "config": [ 00:06:55.269 { 00:06:55.269 "params": { 00:06:55.269 "block_size": 512, 00:06:55.269 "num_blocks": 1048576, 00:06:55.269 "name": "malloc0" 00:06:55.269 }, 00:06:55.269 "method": "bdev_malloc_create" 00:06:55.269 }, 00:06:55.269 { 00:06:55.269 "params": { 00:06:55.269 "filename": "/dev/zram1", 00:06:55.269 "name": "uring0" 00:06:55.269 }, 00:06:55.269 "method": "bdev_uring_create" 00:06:55.269 }, 00:06:55.269 { 00:06:55.269 "method": "bdev_wait_for_examine" 00:06:55.269 } 00:06:55.269 ] 00:06:55.269 } 00:06:55.269 ] 00:06:55.269 } 00:06:55.528 [2024-07-25 13:02:47.545500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.528 [2024-07-25 13:02:47.660456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.528 [2024-07-25 13:02:47.712892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.284  Copying: 158/512 [MB] (158 MBps) Copying: 326/512 [MB] (168 MBps) Copying: 491/512 [MB] (164 MBps) Copying: 512/512 [MB] (average 163 MBps) 00:06:59.284 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:59.284 13:02:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.542 [2024-07-25 13:02:51.498021] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:59.542 [2024-07-25 13:02:51.498527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63222 ] 00:06:59.542 { 00:06:59.542 "subsystems": [ 00:06:59.542 { 00:06:59.542 "subsystem": "bdev", 00:06:59.542 "config": [ 00:06:59.542 { 00:06:59.542 "params": { 00:06:59.542 "block_size": 512, 00:06:59.542 "num_blocks": 1048576, 00:06:59.542 "name": "malloc0" 00:06:59.542 }, 00:06:59.542 "method": "bdev_malloc_create" 00:06:59.542 }, 00:06:59.542 { 00:06:59.542 "params": { 00:06:59.542 "filename": "/dev/zram1", 00:06:59.542 "name": "uring0" 00:06:59.542 }, 00:06:59.542 "method": "bdev_uring_create" 00:06:59.542 }, 00:06:59.542 { 00:06:59.542 "params": { 00:06:59.542 "name": "uring0" 00:06:59.542 }, 00:06:59.542 "method": "bdev_uring_delete" 00:06:59.542 }, 00:06:59.542 { 00:06:59.542 "method": "bdev_wait_for_examine" 00:06:59.542 } 00:06:59.542 ] 00:06:59.542 } 00:06:59.542 ] 00:06:59.542 } 00:06:59.542 [2024-07-25 13:02:51.632364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.800 [2024-07-25 13:02:51.745208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.800 [2024-07-25 13:02:51.799356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.315  Copying: 0/0 [B] (average 0 Bps) 00:07:00.315 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.315 13:02:52 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:00.315 [2024-07-25 13:02:52.489354] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:00.315 [2024-07-25 13:02:52.489458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63251 ] 00:07:00.315 { 00:07:00.315 "subsystems": [ 00:07:00.315 { 00:07:00.315 "subsystem": "bdev", 00:07:00.315 "config": [ 00:07:00.315 { 00:07:00.315 "params": { 00:07:00.315 "block_size": 512, 00:07:00.315 "num_blocks": 1048576, 00:07:00.315 "name": "malloc0" 00:07:00.315 }, 00:07:00.315 "method": "bdev_malloc_create" 00:07:00.315 }, 00:07:00.315 { 00:07:00.315 "params": { 00:07:00.315 "filename": "/dev/zram1", 00:07:00.315 "name": "uring0" 00:07:00.315 }, 00:07:00.315 "method": "bdev_uring_create" 00:07:00.315 }, 00:07:00.315 { 00:07:00.315 "params": { 00:07:00.315 "name": "uring0" 00:07:00.315 }, 00:07:00.315 "method": "bdev_uring_delete" 00:07:00.315 }, 00:07:00.315 { 00:07:00.315 "method": "bdev_wait_for_examine" 00:07:00.315 } 00:07:00.315 ] 00:07:00.315 } 00:07:00.315 ] 00:07:00.315 } 00:07:00.573 [2024-07-25 13:02:52.626403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.573 [2024-07-25 13:02:52.738805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.831 [2024-07-25 13:02:52.794479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.831 [2024-07-25 13:02:53.001505] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:00.831 [2024-07-25 13:02:53.001567] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:00.831 [2024-07-25 13:02:53.001578] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:00.831 [2024-07-25 13:02:53.001587] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.396 [2024-07-25 13:02:53.313382] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:01.396 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:01.653 00:07:01.653 real 0m14.913s 00:07:01.653 user 0m10.095s 00:07:01.653 sys 0m12.035s 00:07:01.653 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.653 ************************************ 00:07:01.653 END TEST dd_uring_copy 00:07:01.653 ************************************ 00:07:01.653 13:02:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.653 ************************************ 00:07:01.653 END TEST spdk_dd_uring 00:07:01.653 ************************************ 00:07:01.653 00:07:01.653 real 0m15.050s 00:07:01.653 user 0m10.156s 00:07:01.653 sys 0m12.110s 00:07:01.653 13:02:53 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.653 13:02:53 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:01.653 13:02:53 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:01.653 13:02:53 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.653 13:02:53 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.653 13:02:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:01.653 ************************************ 00:07:01.653 START TEST spdk_dd_sparse 00:07:01.653 ************************************ 00:07:01.653 13:02:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:01.653 * Looking for test storage... 00:07:01.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:01.653 13:02:53 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:01.653 13:02:53 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.653 13:02:53 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.653 13:02:53 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.653 13:02:53 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:01.654 1+0 records in 00:07:01.654 1+0 records out 00:07:01.654 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00730764 s, 574 MB/s 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:01.654 1+0 records in 00:07:01.654 1+0 records out 00:07:01.654 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00451826 s, 928 MB/s 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:01.654 1+0 records in 00:07:01.654 1+0 records out 00:07:01.654 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00656243 s, 639 MB/s 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.654 13:02:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:01.911 ************************************ 00:07:01.911 START TEST dd_sparse_file_to_file 00:07:01.911 ************************************ 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # 
file_to_file 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:01.911 13:02:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:01.912 [2024-07-25 13:02:53.898297] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:01.912 [2024-07-25 13:02:53.898412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63343 ] 00:07:01.912 { 00:07:01.912 "subsystems": [ 00:07:01.912 { 00:07:01.912 "subsystem": "bdev", 00:07:01.912 "config": [ 00:07:01.912 { 00:07:01.912 "params": { 00:07:01.912 "block_size": 4096, 00:07:01.912 "filename": "dd_sparse_aio_disk", 00:07:01.912 "name": "dd_aio" 00:07:01.912 }, 00:07:01.912 "method": "bdev_aio_create" 00:07:01.912 }, 00:07:01.912 { 00:07:01.912 "params": { 00:07:01.912 "lvs_name": "dd_lvstore", 00:07:01.912 "bdev_name": "dd_aio" 00:07:01.912 }, 00:07:01.912 "method": "bdev_lvol_create_lvstore" 00:07:01.912 }, 00:07:01.912 { 00:07:01.912 "method": "bdev_wait_for_examine" 00:07:01.912 } 00:07:01.912 ] 00:07:01.912 } 00:07:01.912 ] 00:07:01.912 } 00:07:01.912 [2024-07-25 13:02:54.034004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.169 [2024-07-25 13:02:54.141180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.169 [2024-07-25 13:02:54.196236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.427  Copying: 12/36 [MB] (average 1000 MBps) 00:07:02.427 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:02.427 13:02:54 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:02.427 00:07:02.427 real 0m0.707s 00:07:02.427 user 0m0.457s 00:07:02.427 sys 0m0.342s 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.427 ************************************ 00:07:02.427 END TEST dd_sparse_file_to_file 00:07:02.427 ************************************ 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 ************************************ 00:07:02.427 START TEST dd_sparse_file_to_bdev 00:07:02.427 ************************************ 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:02.427 13:02:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:02.685 [2024-07-25 13:02:54.661548] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:02.686 [2024-07-25 13:02:54.662153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:07:02.686 { 00:07:02.686 "subsystems": [ 00:07:02.686 { 00:07:02.686 "subsystem": "bdev", 00:07:02.686 "config": [ 00:07:02.686 { 00:07:02.686 "params": { 00:07:02.686 "block_size": 4096, 00:07:02.686 "filename": "dd_sparse_aio_disk", 00:07:02.686 "name": "dd_aio" 00:07:02.686 }, 00:07:02.686 "method": "bdev_aio_create" 00:07:02.686 }, 00:07:02.686 { 00:07:02.686 "params": { 00:07:02.686 "lvs_name": "dd_lvstore", 00:07:02.686 "lvol_name": "dd_lvol", 00:07:02.686 "size_in_mib": 36, 00:07:02.686 "thin_provision": true 00:07:02.686 }, 00:07:02.686 "method": "bdev_lvol_create" 00:07:02.686 }, 00:07:02.686 { 00:07:02.686 "method": "bdev_wait_for_examine" 00:07:02.686 } 00:07:02.686 ] 00:07:02.686 } 00:07:02.686 ] 00:07:02.686 } 00:07:02.686 [2024-07-25 13:02:54.802049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.944 [2024-07-25 13:02:54.905684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.944 [2024-07-25 13:02:54.959420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.202  Copying: 12/36 [MB] (average 500 MBps) 00:07:03.202 00:07:03.202 00:07:03.202 real 0m0.700s 00:07:03.202 user 0m0.472s 00:07:03.202 sys 0m0.347s 00:07:03.202 ************************************ 00:07:03.202 END TEST dd_sparse_file_to_bdev 00:07:03.202 ************************************ 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:03.202 ************************************ 00:07:03.202 START TEST dd_sparse_bdev_to_file 00:07:03.202 ************************************ 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # 
xtrace_disable 00:07:03.202 13:02:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:03.460 [2024-07-25 13:02:55.407122] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:03.460 [2024-07-25 13:02:55.407211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ] 00:07:03.460 { 00:07:03.460 "subsystems": [ 00:07:03.460 { 00:07:03.460 "subsystem": "bdev", 00:07:03.460 "config": [ 00:07:03.460 { 00:07:03.460 "params": { 00:07:03.460 "block_size": 4096, 00:07:03.460 "filename": "dd_sparse_aio_disk", 00:07:03.460 "name": "dd_aio" 00:07:03.460 }, 00:07:03.460 "method": "bdev_aio_create" 00:07:03.460 }, 00:07:03.460 { 00:07:03.460 "method": "bdev_wait_for_examine" 00:07:03.460 } 00:07:03.460 ] 00:07:03.460 } 00:07:03.460 ] 00:07:03.460 } 00:07:03.460 [2024-07-25 13:02:55.540214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.460 [2024-07-25 13:02:55.649521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.719 [2024-07-25 13:02:55.703405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.977  Copying: 12/36 [MB] (average 1090 MBps) 00:07:03.977 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:03.977 ************************************ 00:07:03.977 END TEST dd_sparse_bdev_to_file 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:03.977 00:07:03.977 real 0m0.684s 00:07:03.977 user 0m0.440s 00:07:03.977 sys 0m0.343s 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:03.977 ************************************ 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:03.977 00:07:03.977 real 0m2.383s 00:07:03.977 user 
0m1.471s 00:07:03.977 sys 0m1.213s 00:07:03.977 ************************************ 00:07:03.977 END TEST spdk_dd_sparse 00:07:03.977 ************************************ 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.977 13:02:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:03.977 13:02:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:03.977 13:02:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.977 13:02:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.977 13:02:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:03.977 ************************************ 00:07:03.977 START TEST spdk_dd_negative 00:07:03.977 ************************************ 00:07:03.977 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:04.238 * Looking for test storage... 00:07:04.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.238 ************************************ 00:07:04.238 START TEST dd_invalid_arguments 00:07:04.238 ************************************ 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.238 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:04.238 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:04.238 00:07:04.238 CPU options: 00:07:04.238 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:04.238 (like [0,1,10]) 00:07:04.238 --lcores lcore to CPU mapping list. The list is in the format: 00:07:04.238 [<,lcores[@CPUs]>...] 00:07:04.238 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:04.238 Within the group, '-' is used for range separator, 00:07:04.238 ',' is used for single number separator. 00:07:04.238 '( )' can be omitted for single element group, 00:07:04.238 '@' can be omitted if cpus and lcores have the same value 00:07:04.238 --disable-cpumask-locks Disable CPU core lock files. 00:07:04.238 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:04.238 pollers in the app support interrupt mode) 00:07:04.238 -p, --main-core main (primary) core for DPDK 00:07:04.238 00:07:04.238 Configuration options: 00:07:04.238 -c, --config, --json JSON config file 00:07:04.238 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:04.238 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:04.238 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:04.238 --rpcs-allowed comma-separated list of permitted RPCS 00:07:04.238 --json-ignore-init-errors don't exit on invalid config entry 00:07:04.238 00:07:04.238 Memory options: 00:07:04.238 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:04.238 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:04.238 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:04.238 -R, --huge-unlink unlink huge files after initialization 00:07:04.238 -n, --mem-channels number of memory channels used for DPDK 00:07:04.238 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:04.238 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:04.238 --no-huge run without using hugepages 00:07:04.238 -i, --shm-id shared memory ID (optional) 00:07:04.238 -g, --single-file-segments force creating just one hugetlbfs file 00:07:04.238 00:07:04.238 PCI options: 00:07:04.238 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:04.239 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:04.239 -u, --no-pci disable PCI access 00:07:04.239 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:04.239 00:07:04.239 Log options: 00:07:04.239 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:04.239 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:04.239 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:04.239 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:04.239 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:04.239 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:04.239 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:04.239 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:04.239 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:04.239 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:04.239 virtio_vfio_user, vmd) 00:07:04.239 --silence-noticelog 
disable notice level logging to stderr 00:07:04.239 00:07:04.239 Trace options: 00:07:04.239 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:04.239 setting 0 to disable trace (default 32768) 00:07:04.239 Tracepoints vary in size and can use more than one trace entry. 00:07:04.239 -e, --tpoint-group [:] 00:07:04.239 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:04.239 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:04.239 [2024-07-25 13:02:56.316935] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:04.239 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:04.239 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:04.239 a tracepoint group. First tpoint inside a group can be enabled by 00:07:04.239 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:04.239 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:04.239 in /include/spdk_internal/trace_defs.h 00:07:04.239 00:07:04.239 Other options: 00:07:04.239 -h, --help show this usage 00:07:04.239 -v, --version print SPDK version 00:07:04.239 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:04.239 --env-context Opaque context for use of the env implementation 00:07:04.239 00:07:04.239 Application specific: 00:07:04.239 [--------- DD Options ---------] 00:07:04.239 --if Input file. Must specify either --if or --ib. 00:07:04.239 --ib Input bdev. Must specifier either --if or --ib 00:07:04.239 --of Output file. Must specify either --of or --ob. 00:07:04.239 --ob Output bdev. Must specify either --of or --ob. 00:07:04.239 --iflag Input file flags. 00:07:04.239 --oflag Output file flags. 00:07:04.239 --bs I/O unit size (default: 4096) 00:07:04.239 --qd Queue depth (default: 2) 00:07:04.239 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:04.239 --skip Skip this many I/O units at start of input. (default: 0) 00:07:04.239 --seek Skip this many I/O units at start of output. (default: 0) 00:07:04.239 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:04.239 --sparse Enable hole skipping in input target 00:07:04.239 Available iflag and oflag values: 00:07:04.239 append - append mode 00:07:04.239 direct - use direct I/O for data 00:07:04.239 directory - fail unless a directory 00:07:04.239 dsync - use synchronized I/O for data 00:07:04.239 noatime - do not update access time 00:07:04.239 noctty - do not assign controlling terminal from file 00:07:04.239 nofollow - do not follow symlinks 00:07:04.239 nonblock - use non-blocking I/O 00:07:04.239 sync - use synchronized I/O for data and metadata 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.239 00:07:04.239 real 0m0.076s 00:07:04.239 user 0m0.043s 00:07:04.239 sys 0m0.032s 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:04.239 ************************************ 00:07:04.239 END TEST dd_invalid_arguments 00:07:04.239 ************************************ 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.239 ************************************ 00:07:04.239 START TEST dd_double_input 00:07:04.239 ************************************ 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.239 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:04.498 [2024-07-25 13:02:56.447566] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.498 00:07:04.498 real 0m0.077s 00:07:04.498 user 0m0.046s 00:07:04.498 sys 0m0.030s 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:04.498 ************************************ 00:07:04.498 END TEST dd_double_input 00:07:04.498 ************************************ 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.498 ************************************ 00:07:04.498 START TEST dd_double_output 00:07:04.498 ************************************ 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.498 13:02:56 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:04.498 [2024-07-25 13:02:56.577093] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:04.498 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.499 00:07:04.499 real 0m0.077s 00:07:04.499 user 0m0.049s 00:07:04.499 sys 0m0.027s 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:04.499 ************************************ 00:07:04.499 END TEST dd_double_output 00:07:04.499 ************************************ 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.499 ************************************ 00:07:04.499 START TEST dd_no_input 00:07:04.499 ************************************ 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.499 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:04.758 [2024-07-25 13:02:56.697521] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.758 00:07:04.758 real 0m0.062s 00:07:04.758 user 0m0.037s 00:07:04.758 sys 0m0.024s 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:04.758 ************************************ 00:07:04.758 END TEST dd_no_input 00:07:04.758 ************************************ 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.758 ************************************ 00:07:04.758 START TEST dd_no_output 00:07:04.758 ************************************ 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.758 13:02:56 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.758 [2024-07-25 13:02:56.815537] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.758 00:07:04.758 real 0m0.066s 00:07:04.758 user 0m0.041s 00:07:04.758 sys 0m0.024s 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:04.758 ************************************ 00:07:04.758 END TEST dd_no_output 00:07:04.758 ************************************ 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.758 ************************************ 00:07:04.758 START TEST dd_wrong_blocksize 00:07:04.758 ************************************ 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.758 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.759 13:02:56 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:04.759 [2024-07-25 13:02:56.928605] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.759 00:07:04.759 real 0m0.064s 00:07:04.759 user 0m0.037s 00:07:04.759 sys 0m0.025s 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.759 13:02:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:04.759 ************************************ 00:07:04.759 END TEST dd_wrong_blocksize 00:07:04.759 ************************************ 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.018 ************************************ 00:07:05.018 START TEST dd_smaller_blocksize 00:07:05.018 ************************************ 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.018 13:02:56 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.018 13:02:56 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:05.018 [2024-07-25 13:02:57.051229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:05.018 [2024-07-25 13:02:57.051338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63642 ] 00:07:05.018 [2024-07-25 13:02:57.190983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.277 [2024-07-25 13:02:57.300699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.277 [2024-07-25 13:02:57.354011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.535 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:05.535 [2024-07-25 13:02:57.649218] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:05.535 [2024-07-25 13:02:57.649262] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.794 [2024-07-25 13:02:57.764239] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.794 ************************************ 00:07:05.794 END TEST dd_smaller_blocksize 00:07:05.794 ************************************ 00:07:05.794 00:07:05.794 real 0m0.874s 00:07:05.794 user 0m0.418s 00:07:05.794 sys 0m0.348s 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:05.794 13:02:57 
spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.794 ************************************ 00:07:05.794 START TEST dd_invalid_count 00:07:05.794 ************************************ 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:05.794 [2024-07-25 13:02:57.962568] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.794 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.795 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.795 00:07:05.795 real 0m0.058s 00:07:05.795 user 0m0.041s 00:07:05.795 sys 0m0.017s 00:07:05.795 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.795 13:02:57 spdk_dd.spdk_dd_negative.dd_invalid_count 
-- common/autotest_common.sh@10 -- # set +x 00:07:05.795 ************************************ 00:07:05.795 END TEST dd_invalid_count 00:07:05.795 ************************************ 00:07:06.052 13:02:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:06.052 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.052 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.052 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.052 ************************************ 00:07:06.052 START TEST dd_invalid_oflag 00:07:06.052 ************************************ 00:07:06.052 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:06.053 [2024-07-25 13:02:58.077301] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.053 00:07:06.053 real 0m0.063s 00:07:06.053 user 0m0.036s 00:07:06.053 sys 0m0.026s 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 
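The negative cases above (double input/output, missing input/output, bad --bs, bad --count, stray --oflag) all go through the same NOT wrapper from autotest_common.sh: it runs spdk_dd with an intentionally invalid option set and only passes when the tool exits non-zero, here 22 (EINVAL). A minimal sketch of that pattern, simplified from the xtrace fragments above (the real helper also validates the executable via type -t / type -P and maps a few special exit codes through a case statement):

  # Simplified sketch of the NOT negative-test wrapper; not the verbatim
  # autotest_common.sh implementation.
  NOT() {
      local es=0
      "$@" || es=$?                          # run spdk_dd with the invalid options
      (( es > 128 )) && es=$(( es - 128 ))   # fold app-stop/signal style exit codes
      (( es != 0 ))                          # pass only if the command really failed
  }

  # Example: both --if and --ib given; spdk_dd prints the usage error and exits 22,
  # so NOT returns success and the test case passes.
  NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=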
00:07:06.053 ************************************ 00:07:06.053 END TEST dd_invalid_oflag 00:07:06.053 ************************************ 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.053 ************************************ 00:07:06.053 START TEST dd_invalid_iflag 00:07:06.053 ************************************ 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:06.053 [2024-07-25 13:02:58.192043] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.053 00:07:06.053 real 0m0.057s 00:07:06.053 user 0m0.035s 00:07:06.053 sys 0m0.022s 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.053 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:06.053 ************************************ 
00:07:06.053 END TEST dd_invalid_iflag 00:07:06.053 ************************************ 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 ************************************ 00:07:06.311 START TEST dd_unknown_flag 00:07:06.311 ************************************ 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.311 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.312 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.312 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:06.312 [2024-07-25 13:02:58.326420] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
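Two of the negative cases, dd_smaller_blocksize above and dd_unknown_flag here, get far enough to bring up the SPDK application (hence the DPDK EAL parameter lines) before failing, so they return app-level exit codes rather than a plain EINVAL. The wrapper then folds those codes down before the final check; roughly as below, where the case mapping is an assumption since the actual patterns are not visible in this trace:

  es=244                                  # dd_smaller_blocksize above
  (( es > 128 )) && es=$(( es - 128 ))    # 244 -> 116 (dd_unknown_flag below goes 234 -> 106)
  case "$es" in
      106|116) es=1 ;;                    # assumed mapping: fold known app-stop codes to 1
  esac
  (( es != 0 ))                           # still non-zero, so the NOT check passes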
00:07:06.312 [2024-07-25 13:02:58.326570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63734 ] 00:07:06.312 [2024-07-25 13:02:58.466586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.569 [2024-07-25 13:02:58.576284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.569 [2024-07-25 13:02:58.629397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.569 [2024-07-25 13:02:58.662649] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:06.569 [2024-07-25 13:02:58.662722] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.569 [2024-07-25 13:02:58.662781] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:06.569 [2024-07-25 13:02:58.662795] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.569 [2024-07-25 13:02:58.663045] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:06.569 [2024-07-25 13:02:58.663065] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.569 [2024-07-25 13:02:58.663122] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:06.569 [2024-07-25 13:02:58.663138] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:06.826 [2024-07-25 13:02:58.773689] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.826 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:06.826 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.826 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:06.826 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.826 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:06.826 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.826 00:07:06.826 real 0m0.618s 00:07:06.827 user 0m0.369s 00:07:06.827 sys 0m0.152s 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:06.827 ************************************ 00:07:06.827 END TEST dd_unknown_flag 00:07:06.827 ************************************ 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.827 ************************************ 00:07:06.827 START TEST dd_invalid_json 00:07:06.827 ************************************ 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.827 13:02:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:06.827 [2024-07-25 13:02:58.985509] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
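dd_invalid_json feeds spdk_dd an empty JSON configuration through --json /dev/fd/62; the descriptor number and the bare ':' traced at negative_dd.sh@95 suggest an empty process substitution, though the redirection itself is not visible in this excerpt. A plausible reconstruction of the invocation:

  # Reconstruction (assumed redirection): pass an empty stream as the JSON config.
  NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --json <(:)
  # expected: json_config.c parse_json "JSON data cannot be empty" and a non-zero exit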
00:07:06.827 [2024-07-25 13:02:58.985622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63768 ] 00:07:07.084 [2024-07-25 13:02:59.123604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.084 [2024-07-25 13:02:59.226267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.084 [2024-07-25 13:02:59.226364] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:07.084 [2024-07-25 13:02:59.226383] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:07.084 [2024-07-25 13:02:59.226395] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.084 [2024-07-25 13:02:59.226443] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.340 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:07.340 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.340 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:07.340 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.340 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:07.341 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.341 00:07:07.341 real 0m0.394s 00:07:07.341 user 0m0.220s 00:07:07.341 sys 0m0.071s 00:07:07.341 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.341 13:02:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:07.341 ************************************ 00:07:07.341 END TEST dd_invalid_json 00:07:07.341 ************************************ 00:07:07.341 00:07:07.341 real 0m3.208s 00:07:07.341 user 0m1.612s 00:07:07.341 sys 0m1.230s 00:07:07.341 13:02:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.341 13:02:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.341 ************************************ 00:07:07.341 END TEST spdk_dd_negative 00:07:07.341 ************************************ 00:07:07.341 00:07:07.341 real 1m18.239s 00:07:07.341 user 0m50.972s 00:07:07.341 sys 0m33.460s 00:07:07.341 13:02:59 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.341 13:02:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:07.341 ************************************ 00:07:07.341 END TEST spdk_dd 00:07:07.341 ************************************ 00:07:07.341 13:02:59 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:07.341 13:02:59 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:07.341 13:02:59 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:07.341 13:02:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.341 13:02:59 -- common/autotest_common.sh@10 -- # set +x 00:07:07.341 13:02:59 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:07.341 13:02:59 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:07.341 13:02:59 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:07.341 13:02:59 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:07.341 13:02:59 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:07.341 13:02:59 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:07.341 13:02:59 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.341 13:02:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.341 13:02:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.341 13:02:59 -- common/autotest_common.sh@10 -- # set +x 00:07:07.341 ************************************ 00:07:07.341 START TEST nvmf_tcp 00:07:07.341 ************************************ 00:07:07.341 13:02:59 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:07.599 * Looking for test storage... 00:07:07.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:07.599 13:02:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:07.599 13:02:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:07.599 13:02:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.599 13:02:59 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.599 13:02:59 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.599 13:02:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.599 ************************************ 00:07:07.599 START TEST nvmf_target_core 00:07:07.599 ************************************ 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:07.599 * Looking for test storage... 00:07:07.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.599 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.600 ************************************ 00:07:07.600 START TEST nvmf_host_management 00:07:07.600 ************************************ 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:07.600 * Looking for test storage... 
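The run has now moved on to the NVMe-oF TCP target tests (nvmf_target_core, then nvmf_host_management). Sourcing test/nvmf/common.sh, as traced above, pins the TCP listener ports to 4420-4422 and generates a fresh host NQN for the initiator; the relevant assignments boil down to the following (the uuid extraction for NVME_HOSTID is an assumption, the trace only shows the resulting value):

  # As set up by test/nvmf/common.sh in the trace above.
  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:0dc0e430-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed derivation: reuse the uuid part as --hostid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")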
00:07:07.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:07.600 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.858 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:07.859 Cannot find device "nvmf_init_br" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:07.859 Cannot find device "nvmf_tgt_br" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:07.859 Cannot find device "nvmf_tgt_br2" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:07.859 Cannot find device "nvmf_init_br" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:07.859 Cannot find device "nvmf_tgt_br" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:07.859 Cannot find device "nvmf_tgt_br2" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:07.859 Cannot find device "nvmf_br" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:07.859 Cannot find device "nvmf_init_if" 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:07.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:07.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:07.859 13:02:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:07.859 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:07.859 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:07.859 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:07.859 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:07.859 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:07.859 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:08.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:07:08.118 00:07:08.118 --- 10.0.0.2 ping statistics --- 00:07:08.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.118 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:08.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:08.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:07:08.118 00:07:08.118 --- 10.0.0.3 ping statistics --- 00:07:08.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.118 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:08.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:08.118 00:07:08.118 --- 10.0.0.1 ping statistics --- 00:07:08.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.118 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64049 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64049 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64049 ']' 00:07:08.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
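The nvmf/common.sh commands traced above build the virtual test network for this job: three veth pairs, with the target-side ends (nvmf_tgt_if, nvmf_tgt_if2) moved into the nvmf_tgt_ns_spdk namespace, the host-side ends enslaved to the nvmf_br bridge, and iptables opened for NVMe/TCP port 4420, so the initiator at 10.0.0.1 can reach the target listeners at 10.0.0.2 and 10.0.0.3. A condensed sketch of that topology, with names and addresses copied from the trace (an illustration, not the exact common.sh code):

# Build the veth/bridge topology used by the nvmf TCP tests
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow hairpin on the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host reaches both target ports
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # namespace reaches the host

The three ping checks correspond to the ping statistics recorded in the trace around this point.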
00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.118 13:03:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.118 [2024-07-25 13:03:00.263116] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:08.118 [2024-07-25 13:03:00.263214] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.376 [2024-07-25 13:03:00.408792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.376 [2024-07-25 13:03:00.536812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.376 [2024-07-25 13:03:00.537395] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.376 [2024-07-25 13:03:00.537709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.376 [2024-07-25 13:03:00.538155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.376 [2024-07-25 13:03:00.538367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
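With the namespace in place, the target itself is launched as ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E (nvmfpid=64049 above): -m 0x1E is the reactor core mask, -e 0xFFFF enables every tracepoint group (hence the spdk_trace hint and /dev/shm/nvmf_trace.0), and -i 0 is the shared-memory instance id that spdk_trace is told to attach to. 0x1E has bits 1 through 4 set, which is why the notices that follow report reactors on cores 1, 2, 3 and 4. A small throwaway sketch for decoding such a mask (illustrative helper, not part of the harness):

mask=0x1E
for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && echo "reactor core $core"
done
# prints: reactor core 1, 2, 3, 4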
00:07:08.376 [2024-07-25 13:03:00.538602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.376 [2024-07-25 13:03:00.539089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.376 [2024-07-25 13:03:00.539220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.376 [2024-07-25 13:03:00.539311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.634 [2024-07-25 13:03:00.596672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.200 [2024-07-25 13:03:01.250310] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:09.200 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.201 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.201 Malloc0 00:07:09.201 [2024-07-25 13:03:01.341921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.201 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.201 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:09.201 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.201 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64109 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64109 /var/tmp/bdevperf.sock 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64109 ']' 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:09.458 { 00:07:09.458 "params": { 00:07:09.458 "name": "Nvme$subsystem", 00:07:09.458 "trtype": "$TEST_TRANSPORT", 00:07:09.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.458 "adrfam": "ipv4", 00:07:09.458 "trsvcid": "$NVMF_PORT", 00:07:09.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:09.458 "hdgst": ${hdgst:-false}, 00:07:09.458 "ddgst": ${ddgst:-false} 00:07:09.458 }, 00:07:09.458 "method": "bdev_nvme_attach_controller" 00:07:09.458 } 00:07:09.458 EOF 00:07:09.458 )") 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
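On the initiator side, host_management.sh now starts bdevperf (perfpid=64109) against the first target port. gen_nvmf_target_json assembles one bdev_nvme_attach_controller stanza per subsystem through the heredoc traced above and pretty-prints it with jq; bdevperf reads that config through process substitution, which is what shows up as --json /dev/fd/63 on its command line, and the rendered parameters are echoed in the printf output that follows. A sketch of the invocation with the flags spelled out (flag values copied from the trace, descriptions are standard bdevperf semantics):

# Launch bdevperf the way the harness does.
#   -r   private RPC socket for this bdevperf instance
#   -q   queue depth 64         -o  I/O size 65536 bytes (64 KiB)
#   -w   verify workload (write, read back, compare)
#   -t   run time, 10 seconds
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10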
00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:09.458 13:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:09.458 "params": { 00:07:09.458 "name": "Nvme0", 00:07:09.458 "trtype": "tcp", 00:07:09.458 "traddr": "10.0.0.2", 00:07:09.458 "adrfam": "ipv4", 00:07:09.458 "trsvcid": "4420", 00:07:09.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.458 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.458 "hdgst": false, 00:07:09.458 "ddgst": false 00:07:09.458 }, 00:07:09.458 "method": "bdev_nvme_attach_controller" 00:07:09.458 }' 00:07:09.458 [2024-07-25 13:03:01.443013] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:09.458 [2024-07-25 13:03:01.443256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64109 ] 00:07:09.458 [2024-07-25 13:03:01.607900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.715 [2024-07-25 13:03:01.736298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.716 [2024-07-25 13:03:01.802852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.973 Running I/O for 10 seconds... 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.538 13:03:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:10.538 [2024-07-25 13:03:02.543818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.538 [2024-07-25 13:03:02.543890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.538 [2024-07-25 13:03:02.543916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.538 [2024-07-25 13:03:02.543928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.538 [2024-07-25 13:03:02.543940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.538 [2024-07-25 13:03:02.543951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.538 [2024-07-25 13:03:02.543962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.538 [2024-07-25 13:03:02.543972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.538 [2024-07-25 13:03:02.543984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.538 [2024-07-25 13:03:02.543994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.538 [2024-07-25 13:03:02.544005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.538 [2024-07-25 13:03:02.544015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.544870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.544880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.539 [2024-07-25 13:03:02.545220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.539 [2024-07-25 13:03:02.545442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.545620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.545744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.545932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ctask offset: 114688 on job bdev=Nvme0n1 fails 00:07:10.540 00:07:10.540 Latency(us) 00:07:10.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.540 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:10.540 Job: Nvme0n1 ended in about 0.62 seconds with error 00:07:10.540 Verification LBA range: start 0x0 length 0x400 00:07:10.540 Nvme0n1 : 0.62 1442.46 90.15 103.03 0.00 40126.07 3500.22 41943.04 00:07:10.540 =================================================================================================================== 00:07:10.540 Total : 1442.46 90.15 103.03 0.00 40126.07 3500.22 41943.04 00:07:10.540 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:10.540 [2024-07-25 13:03:02.546607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1994ec0 is same with the state(5) to be set 00:07:10.540 [2024-07-25 13:03:02.546685] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1994ec0 was disconnected and freed. reset controller. 00:07:10.540 [2024-07-25 13:03:02.546797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.540 [2024-07-25 13:03:02.546814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.540 [2024-07-25 13:03:02.546850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.540 [2024-07-25 13:03:02.546871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:10.540 [2024-07-25 13:03:02.546892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:10.540 [2024-07-25 13:03:02.546901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198cd50 is same with the state(5) to be set 00:07:10.540 [2024-07-25 13:03:02.547982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:10.540 [2024-07-25 13:03:02.550098] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.540 [2024-07-25 13:03:02.550233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198cd50 (9): Bad file descriptor 00:07:10.540 [2024-07-25 13:03:02.554346] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
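The wall of ABORTED - SQ DELETION completions above is the point of the test, not a malfunction: once waitforio saw at least 100 completed reads on Nvme0n1 (771 here), host_management.sh revoked the initiator's access with nvmf_subsystem_remove_host and immediately re-added the host NQN; the target tore down the queue pair, every in-flight WRITE came back aborted, bdev_nvme disconnected qpair 0x1994ec0 and reset the controller, and because access had been restored the reset completed ("Resetting controller successful"). A sketch of the same sequence as direct rpc.py calls (socket paths, bdev and NQN names copied from the trace):

# Poll bdevperf's own RPC socket until enough I/O has completed
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
reads=$($rpc_py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')        # 771 in this run

# Revoke the host's access to the subsystem, then restore it right away
$rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$rpc_py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1    # give bdev_nvme time to notice the disconnect and finish the reset

The kill -9 on the next line reports "No such process" because bdevperf had already stopped itself (spdk_app_stop'd on non-zero); the harness tolerates that, as shown by the bare true executed at the same host_management.sh line.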
00:07:11.473 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64109 00:07:11.473 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64109) - No such process 00:07:11.473 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:11.473 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:11.473 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:11.473 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:11.473 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:11.473 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:11.474 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:11.474 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:11.474 { 00:07:11.474 "params": { 00:07:11.474 "name": "Nvme$subsystem", 00:07:11.474 "trtype": "$TEST_TRANSPORT", 00:07:11.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:11.474 "adrfam": "ipv4", 00:07:11.474 "trsvcid": "$NVMF_PORT", 00:07:11.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:11.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:11.474 "hdgst": ${hdgst:-false}, 00:07:11.474 "ddgst": ${ddgst:-false} 00:07:11.474 }, 00:07:11.474 "method": "bdev_nvme_attach_controller" 00:07:11.474 } 00:07:11.474 EOF 00:07:11.474 )") 00:07:11.474 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:11.474 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:11.474 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:11.474 13:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:11.474 "params": { 00:07:11.474 "name": "Nvme0", 00:07:11.474 "trtype": "tcp", 00:07:11.474 "traddr": "10.0.0.2", 00:07:11.474 "adrfam": "ipv4", 00:07:11.474 "trsvcid": "4420", 00:07:11.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:11.474 "hdgst": false, 00:07:11.474 "ddgst": false 00:07:11.474 }, 00:07:11.474 "method": "bdev_nvme_attach_controller" 00:07:11.474 }' 00:07:11.474 [2024-07-25 13:03:03.596386] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:11.474 [2024-07-25 13:03:03.596657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64147 ] 00:07:11.731 [2024-07-25 13:03:03.729450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.732 [2024-07-25 13:03:03.830432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.732 [2024-07-25 13:03:03.901885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.989 Running I/O for 1 seconds... 00:07:12.925 00:07:12.925 Latency(us) 00:07:12.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.925 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:12.925 Verification LBA range: start 0x0 length 0x400 00:07:12.925 Nvme0n1 : 1.03 1609.61 100.60 0.00 0.00 38995.88 3991.74 37176.79 00:07:12.925 =================================================================================================================== 00:07:12.925 Total : 1609.61 100.60 0.00 0.00 38995.88 3991.74 37176.79 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.183 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.183 rmmod nvme_tcp 00:07:13.183 rmmod nvme_fabrics 00:07:13.441 rmmod nvme_keyring 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64049 ']' 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64049 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64049 ']' 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64049 00:07:13.441 13:03:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64049 00:07:13.441 killing process with pid 64049 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64049' 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64049 00:07:13.441 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64049 00:07:13.701 [2024-07-25 13:03:05.644638] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:13.701 00:07:13.701 real 0m6.002s 00:07:13.701 user 0m23.237s 00:07:13.701 sys 0m1.509s 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.701 ************************************ 00:07:13.701 END TEST nvmf_host_management 00:07:13.701 ************************************ 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.701 ************************************ 00:07:13.701 START TEST nvmf_lvol 00:07:13.701 ************************************ 00:07:13.701 13:03:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.701 * Looking for test storage... 00:07:13.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.701 13:03:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:13.701 13:03:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:13.701 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:13.702 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:13.960 Cannot find device "nvmf_tgt_br" 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 
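The teardown just traced and the setup commands that follow are nvmf_veth_init from test/nvmf/common.sh: the target gets its own network namespace, the initiator stays in the root namespace, and three veth pairs are tied together by a bridge. Below is a condensed, hedged sketch of that topology, using only commands and names that appear in this trace; the loop is an editorial condensation and the helper itself does more error handling, so treat this as an illustration rather than a replacement for the script.

    # Target-side interfaces live in their own namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator-facing, two target-facing.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the 10.0.0.0/24 addresses used throughout the log.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the *_br ends in the root namespace.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Accept NVMe/TCP traffic on port 4420, let the bridge forward, and verify reachability both ways.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this topology in place, the trace goes on to start nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7) while the initiator-side tools reach it at 10.0.0.2:4420 from the root namespace.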
00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:13.960 Cannot find device "nvmf_tgt_br2" 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:13.960 Cannot find device "nvmf_tgt_br" 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:13.960 Cannot find device "nvmf_tgt_br2" 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:13.960 13:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:13.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.960 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:13.960 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:13.960 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.960 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:13.960 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:13.961 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:14.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:14.220 00:07:14.220 --- 10.0.0.2 ping statistics --- 00:07:14.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.220 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:14.220 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:14.220 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:07:14.220 00:07:14.220 --- 10.0.0.3 ping statistics --- 00:07:14.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.220 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:14.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:07:14.220 00:07:14.220 --- 10.0.0.1 ping statistics --- 00:07:14.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.220 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64367 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64367 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 64367 ']' 00:07:14.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.220 13:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.220 [2024-07-25 13:03:06.270119] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:14.220 [2024-07-25 13:03:06.270205] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.220 [2024-07-25 13:03:06.408385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.478 [2024-07-25 13:03:06.510547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.478 [2024-07-25 13:03:06.510609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.478 [2024-07-25 13:03:06.510620] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.478 [2024-07-25 13:03:06.510628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.478 [2024-07-25 13:03:06.510634] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.478 [2024-07-25 13:03:06.510754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.478 [2024-07-25 13:03:06.511093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.478 [2024-07-25 13:03:06.511105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.478 [2024-07-25 13:03:06.564014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.045 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.045 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:15.045 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:15.045 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.045 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.045 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.045 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:15.326 [2024-07-25 13:03:07.447974] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.326 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.597 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:15.597 13:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.856 13:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:15.856 13:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:16.113 13:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:16.371 13:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f1f20896-f821-43f9-b13a-9bae2f2a847a 00:07:16.371 13:03:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f1f20896-f821-43f9-b13a-9bae2f2a847a lvol 20 00:07:16.629 13:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=56867b36-c097-4b91-9b7d-13a79038358f 00:07:16.629 13:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.887 13:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56867b36-c097-4b91-9b7d-13a79038358f 00:07:17.145 13:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.403 [2024-07-25 13:03:09.450605] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.403 13:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.660 13:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:17.660 13:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64437 00:07:17.660 13:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:18.592 13:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 56867b36-c097-4b91-9b7d-13a79038358f MY_SNAPSHOT 00:07:19.157 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1b66e6b7-c7d9-49a8-adde-02d017a3a502 00:07:19.157 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 56867b36-c097-4b91-9b7d-13a79038358f 30 00:07:19.415 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 1b66e6b7-c7d9-49a8-adde-02d017a3a502 MY_CLONE 00:07:19.672 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=794fe05c-8934-47be-8738-fc6f5af800de 00:07:19.672 13:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 794fe05c-8934-47be-8738-fc6f5af800de 00:07:19.929 13:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64437 00:07:28.085 Initializing NVMe Controllers 00:07:28.085 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:28.085 Controller IO queue size 128, less than required. 00:07:28.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:28.085 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:28.085 Initialization complete. Launching workers. 
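The perf numbers that follow come from the I/O generator started in the RPC sequence traced just above, which is the core of nvmf_lvol.sh: two malloc bdevs are striped into a raid0, an lvstore and a thin 20 MiB lvol are created on it, the lvol is exported over NVMe/TCP, and snapshot/resize/clone/inflate are exercised while spdk_nvme_perf drives random 4 KiB writes for 10 seconds. A condensed sketch of that sequence, with the rpc variable and the shell variables for captured UUIDs added here only for readability (flags and sizes are exactly those in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc_py in the test script

    # TCP transport plus a raid0 of two 64 MiB malloc bdevs to back the lvstore.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # -> Malloc0
    $rpc bdev_malloc_create 64 512                    # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

    # Lvstore on the raid, then a 20 MiB lvol that is later resized to 30 MiB.
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

    # Export the lvol over NVMe/TCP on the namespace-side address.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Random-write load in the background while the lvol is snapshotted, resized, cloned and inflated.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait "$perf_pid"

The latency table printed next is the summary spdk_nvme_perf emits when that background run completes.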
00:07:28.085 ======================================================== 00:07:28.085 Latency(us) 00:07:28.085 Device Information : IOPS MiB/s Average min max 00:07:28.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10731.66 41.92 11939.28 554.69 71896.57 00:07:28.085 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10445.77 40.80 12264.97 3352.27 78412.75 00:07:28.085 ======================================================== 00:07:28.085 Total : 21177.43 82.72 12099.93 554.69 78412.75 00:07:28.085 00:07:28.085 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:28.342 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 56867b36-c097-4b91-9b7d-13a79038358f 00:07:28.599 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1f20896-f821-43f9-b13a-9bae2f2a847a 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.857 rmmod nvme_tcp 00:07:28.857 rmmod nvme_fabrics 00:07:28.857 rmmod nvme_keyring 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64367 ']' 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64367 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 64367 ']' 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 64367 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64367 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 64367' 00:07:28.857 killing process with pid 64367 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 64367 00:07:28.857 13:03:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 64367 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:29.115 00:07:29.115 real 0m15.500s 00:07:29.115 user 1m4.641s 00:07:29.115 sys 0m4.171s 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.115 ************************************ 00:07:29.115 END TEST nvmf_lvol 00:07:29.115 ************************************ 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.115 13:03:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.374 ************************************ 00:07:29.374 START TEST nvmf_lvs_grow 00:07:29.374 ************************************ 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:29.374 * Looking for test storage... 
00:07:29.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.374 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
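The nvmftestinit just entered rebuilds the same veth/namespace topology used by the previous test; once it completes, the lvs_grow_clean case that follows exercises growing a logical volume store after its backing device grows: a 200 MiB file-backed AIO bdev carries an lvstore with 4 MiB clusters (the trace reports 49 data clusters once metadata is accounted for), a thin 150 MiB lvol from it is exported to bdevperf over NVMe/TCP, the backing file is then enlarged to 400 MiB, the AIO bdev is rescanned, and bdev_lvol_grow_lvstore is expected to bring the count to 99 data clusters. A condensed sketch of that resize path, with the rpc and aio_file variables added here only for readability (commands, sizes and flags are those shown later in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev   # backing file used by the test

    # 200 MiB backing file -> AIO bdev -> lvstore with 4 MiB clusters.
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 in the trace

    # Thin 150 MiB lvol that bdevperf writes to over NVMe/TCP while the store is grown.
    $rpc bdev_lvol_create -u "$lvs" lvol 150

    # Enlarge the backing file, let the AIO bdev pick up the new size, then grow the lvstore.
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 expected after the grow

The jump from 49 to 99 data clusters (roughly 200 MiB/4 MiB and 400 MiB/4 MiB minus metadata) is the pass condition the trace checks with jq further down.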
00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:29.375 Cannot find device "nvmf_tgt_br" 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.375 Cannot find device "nvmf_tgt_br2" 00:07:29.375 13:03:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:29.375 Cannot find device "nvmf_tgt_br" 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:29.375 Cannot find device "nvmf_tgt_br2" 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.375 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.375 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.635 13:03:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:29.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:07:29.635 00:07:29.635 --- 10.0.0.2 ping statistics --- 00:07:29.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.635 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:29.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:07:29.635 00:07:29.635 --- 10.0.0.3 ping statistics --- 00:07:29.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.635 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:07:29.635 00:07:29.635 --- 10.0.0.1 ping statistics --- 00:07:29.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.635 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=64762 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 64762 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 64762 ']' 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.635 13:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.635 [2024-07-25 13:03:21.808059] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:29.635 [2024-07-25 13:03:21.808182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.894 [2024-07-25 13:03:21.945418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.894 [2024-07-25 13:03:22.058696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.894 [2024-07-25 13:03:22.058764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.894 [2024-07-25 13:03:22.058776] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.894 [2024-07-25 13:03:22.058785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.894 [2024-07-25 13:03:22.058793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.894 [2024-07-25 13:03:22.058822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.152 [2024-07-25 13:03:22.115978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.718 13:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.718 13:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:30.718 13:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.718 13:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.718 13:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.718 13:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.718 13:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.977 [2024-07-25 13:03:23.012588] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.977 ************************************ 00:07:30.977 START TEST lvs_grow_clean 00:07:30.977 ************************************ 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:30.977 13:03:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.977 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:31.543 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:31.543 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:31.543 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dab11525-2442-43f6-b35a-a6f4f038c232 00:07:31.543 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:31.543 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:31.801 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:31.801 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:31.801 13:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dab11525-2442-43f6-b35a-a6f4f038c232 lvol 150 00:07:32.059 13:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=068c31b9-9084-4e67-ad49-d8558690531b 00:07:32.059 13:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:32.059 13:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:32.315 [2024-07-25 13:03:24.429636] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:32.315 [2024-07-25 13:03:24.429756] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:32.315 true 00:07:32.315 13:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:32.315 13:03:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:32.573 13:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:32.573 13:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:32.830 13:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 068c31b9-9084-4e67-ad49-d8558690531b 00:07:33.088 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.345 [2024-07-25 13:03:25.410419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.345 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.603 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64850 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64850 /var/tmp/bdevperf.sock 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 64850 ']' 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.604 13:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:33.604 [2024-07-25 13:03:25.786299] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:33.604 [2024-07-25 13:03:25.786441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64850 ] 00:07:33.861 [2024-07-25 13:03:25.930850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.118 [2024-07-25 13:03:26.061294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.118 [2024-07-25 13:03:26.121653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.683 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.683 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:34.683 13:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:34.941 Nvme0n1 00:07:34.941 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:35.199 [ 00:07:35.199 { 00:07:35.199 "name": "Nvme0n1", 00:07:35.199 "aliases": [ 00:07:35.199 "068c31b9-9084-4e67-ad49-d8558690531b" 00:07:35.199 ], 00:07:35.199 "product_name": "NVMe disk", 00:07:35.199 "block_size": 4096, 00:07:35.199 "num_blocks": 38912, 00:07:35.199 "uuid": "068c31b9-9084-4e67-ad49-d8558690531b", 00:07:35.199 "assigned_rate_limits": { 00:07:35.199 "rw_ios_per_sec": 0, 00:07:35.199 "rw_mbytes_per_sec": 0, 00:07:35.199 "r_mbytes_per_sec": 0, 00:07:35.199 "w_mbytes_per_sec": 0 00:07:35.199 }, 00:07:35.199 "claimed": false, 00:07:35.199 "zoned": false, 00:07:35.199 "supported_io_types": { 00:07:35.199 "read": true, 00:07:35.199 "write": true, 00:07:35.199 "unmap": true, 00:07:35.199 "flush": true, 00:07:35.199 "reset": true, 00:07:35.199 "nvme_admin": true, 00:07:35.199 "nvme_io": true, 00:07:35.199 "nvme_io_md": false, 00:07:35.199 "write_zeroes": true, 00:07:35.199 "zcopy": false, 00:07:35.199 "get_zone_info": false, 00:07:35.199 "zone_management": false, 00:07:35.199 "zone_append": false, 00:07:35.199 "compare": true, 00:07:35.199 "compare_and_write": true, 00:07:35.199 "abort": true, 00:07:35.199 "seek_hole": false, 00:07:35.199 "seek_data": false, 00:07:35.199 "copy": true, 00:07:35.199 "nvme_iov_md": false 00:07:35.199 }, 00:07:35.199 "memory_domains": [ 00:07:35.199 { 00:07:35.199 "dma_device_id": "system", 00:07:35.199 "dma_device_type": 1 00:07:35.199 } 00:07:35.199 ], 00:07:35.199 "driver_specific": { 00:07:35.199 "nvme": [ 00:07:35.199 { 00:07:35.199 "trid": { 00:07:35.199 "trtype": "TCP", 00:07:35.199 "adrfam": "IPv4", 00:07:35.199 "traddr": "10.0.0.2", 00:07:35.199 "trsvcid": "4420", 00:07:35.199 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:35.199 }, 00:07:35.199 "ctrlr_data": { 00:07:35.199 "cntlid": 1, 00:07:35.199 "vendor_id": "0x8086", 00:07:35.199 "model_number": "SPDK bdev Controller", 00:07:35.199 "serial_number": "SPDK0", 00:07:35.199 "firmware_revision": "24.09", 00:07:35.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.199 "oacs": { 00:07:35.199 "security": 0, 00:07:35.199 "format": 0, 00:07:35.199 "firmware": 0, 00:07:35.199 "ns_manage": 0 
00:07:35.199 }, 00:07:35.199 "multi_ctrlr": true, 00:07:35.199 "ana_reporting": false 00:07:35.199 }, 00:07:35.199 "vs": { 00:07:35.199 "nvme_version": "1.3" 00:07:35.199 }, 00:07:35.199 "ns_data": { 00:07:35.199 "id": 1, 00:07:35.199 "can_share": true 00:07:35.199 } 00:07:35.199 } 00:07:35.199 ], 00:07:35.199 "mp_policy": "active_passive" 00:07:35.199 } 00:07:35.199 } 00:07:35.199 ] 00:07:35.199 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64872 00:07:35.199 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:35.199 13:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:35.457 Running I/O for 10 seconds... 00:07:36.390 Latency(us) 00:07:36.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.390 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:36.390 =================================================================================================================== 00:07:36.390 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:36.390 00:07:37.322 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:37.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.322 Nvme0n1 : 2.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:37.322 =================================================================================================================== 00:07:37.322 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:37.322 00:07:37.579 true 00:07:37.579 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:37.579 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:37.836 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:37.836 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:37.836 13:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 64872 00:07:38.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.401 Nvme0n1 : 3.00 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:07:38.401 =================================================================================================================== 00:07:38.401 Total : 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:07:38.401 00:07:39.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.334 Nvme0n1 : 4.00 7651.75 29.89 0.00 0.00 0.00 0.00 0.00 00:07:39.334 =================================================================================================================== 00:07:39.334 Total : 7651.75 29.89 0.00 0.00 0.00 0.00 0.00 00:07:39.334 00:07:40.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.706 Nvme0n1 : 5.00 
7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:40.706 =================================================================================================================== 00:07:40.706 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:40.706 00:07:41.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.639 Nvme0n1 : 6.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:41.639 =================================================================================================================== 00:07:41.639 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:41.639 00:07:42.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.570 Nvme0n1 : 7.00 7583.71 29.62 0.00 0.00 0.00 0.00 0.00 00:07:42.570 =================================================================================================================== 00:07:42.570 Total : 7583.71 29.62 0.00 0.00 0.00 0.00 0.00 00:07:42.570 00:07:43.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.505 Nvme0n1 : 8.00 7540.62 29.46 0.00 0.00 0.00 0.00 0.00 00:07:43.505 =================================================================================================================== 00:07:43.505 Total : 7540.62 29.46 0.00 0.00 0.00 0.00 0.00 00:07:43.505 00:07:44.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.440 Nvme0n1 : 9.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:44.440 =================================================================================================================== 00:07:44.440 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:07:44.440 00:07:45.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.374 Nvme0n1 : 10.00 7454.90 29.12 0.00 0.00 0.00 0.00 0.00 00:07:45.374 =================================================================================================================== 00:07:45.374 Total : 7454.90 29.12 0.00 0.00 0.00 0.00 0.00 00:07:45.374 00:07:45.374 00:07:45.374 Latency(us) 00:07:45.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.374 Nvme0n1 : 10.00 7465.05 29.16 0.00 0.00 17141.33 14954.12 39798.23 00:07:45.374 =================================================================================================================== 00:07:45.374 Total : 7465.05 29.16 0.00 0.00 17141.33 14954.12 39798.23 00:07:45.374 0 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64850 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 64850 ']' 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 64850 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64850 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:45.374 killing process with pid 64850 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64850' 00:07:45.374 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 64850 00:07:45.374 Received shutdown signal, test time was about 10.000000 seconds 00:07:45.374 00:07:45.374 Latency(us) 00:07:45.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.375 =================================================================================================================== 00:07:45.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:45.375 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 64850 00:07:45.633 13:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:45.890 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.146 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:46.146 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:46.404 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:46.404 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:46.404 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.663 [2024-07-25 13:03:38.752456] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.663 13:03:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:46.663 13:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:46.921 request: 00:07:46.921 { 00:07:46.921 "uuid": "dab11525-2442-43f6-b35a-a6f4f038c232", 00:07:46.921 "method": "bdev_lvol_get_lvstores", 00:07:46.921 "req_id": 1 00:07:46.921 } 00:07:46.921 Got JSON-RPC error response 00:07:46.921 response: 00:07:46.921 { 00:07:46.921 "code": -19, 00:07:46.921 "message": "No such device" 00:07:46.921 } 00:07:46.921 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:46.921 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.921 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.921 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.921 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.179 aio_bdev 00:07:47.179 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 068c31b9-9084-4e67-ad49-d8558690531b 00:07:47.179 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=068c31b9-9084-4e67-ad49-d8558690531b 00:07:47.179 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.179 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:47.179 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.179 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.179 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:47.437 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 068c31b9-9084-4e67-ad49-d8558690531b -t 2000 00:07:47.695 [ 00:07:47.696 { 00:07:47.696 "name": "068c31b9-9084-4e67-ad49-d8558690531b", 00:07:47.696 "aliases": [ 00:07:47.696 "lvs/lvol" 00:07:47.696 ], 00:07:47.696 "product_name": "Logical Volume", 00:07:47.696 "block_size": 4096, 00:07:47.696 "num_blocks": 38912, 00:07:47.696 "uuid": "068c31b9-9084-4e67-ad49-d8558690531b", 00:07:47.696 
"assigned_rate_limits": { 00:07:47.696 "rw_ios_per_sec": 0, 00:07:47.696 "rw_mbytes_per_sec": 0, 00:07:47.696 "r_mbytes_per_sec": 0, 00:07:47.696 "w_mbytes_per_sec": 0 00:07:47.696 }, 00:07:47.696 "claimed": false, 00:07:47.696 "zoned": false, 00:07:47.696 "supported_io_types": { 00:07:47.696 "read": true, 00:07:47.696 "write": true, 00:07:47.696 "unmap": true, 00:07:47.696 "flush": false, 00:07:47.696 "reset": true, 00:07:47.696 "nvme_admin": false, 00:07:47.696 "nvme_io": false, 00:07:47.696 "nvme_io_md": false, 00:07:47.696 "write_zeroes": true, 00:07:47.696 "zcopy": false, 00:07:47.696 "get_zone_info": false, 00:07:47.696 "zone_management": false, 00:07:47.696 "zone_append": false, 00:07:47.696 "compare": false, 00:07:47.696 "compare_and_write": false, 00:07:47.696 "abort": false, 00:07:47.696 "seek_hole": true, 00:07:47.696 "seek_data": true, 00:07:47.696 "copy": false, 00:07:47.696 "nvme_iov_md": false 00:07:47.696 }, 00:07:47.696 "driver_specific": { 00:07:47.696 "lvol": { 00:07:47.696 "lvol_store_uuid": "dab11525-2442-43f6-b35a-a6f4f038c232", 00:07:47.696 "base_bdev": "aio_bdev", 00:07:47.696 "thin_provision": false, 00:07:47.696 "num_allocated_clusters": 38, 00:07:47.696 "snapshot": false, 00:07:47.696 "clone": false, 00:07:47.696 "esnap_clone": false 00:07:47.696 } 00:07:47.696 } 00:07:47.696 } 00:07:47.696 ] 00:07:47.696 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:47.696 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:47.696 13:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:47.954 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:47.954 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:47.954 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:48.212 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:48.212 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 068c31b9-9084-4e67-ad49-d8558690531b 00:07:48.469 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dab11525-2442-43f6-b35a-a6f4f038c232 00:07:48.727 13:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:48.985 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.550 ************************************ 00:07:49.550 END TEST lvs_grow_clean 00:07:49.550 ************************************ 00:07:49.550 00:07:49.550 real 0m18.419s 00:07:49.550 user 0m17.331s 00:07:49.550 sys 0m2.551s 00:07:49.550 13:03:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.550 ************************************ 00:07:49.550 START TEST lvs_grow_dirty 00:07:49.550 ************************************ 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.550 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.808 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:49.808 13:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:50.066 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:07:50.066 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:50.066 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:07:50.323 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:50.323 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # 
(( data_clusters == 49 )) 00:07:50.323 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 lvol 150 00:07:50.580 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=488a2d86-4416-495a-b5f8-0a0c4407a835 00:07:50.580 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.580 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:50.838 [2024-07-25 13:03:42.779733] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.838 [2024-07-25 13:03:42.779821] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.838 true 00:07:50.838 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:07:50.838 13:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:51.095 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:51.095 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:51.366 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 488a2d86-4416-495a-b5f8-0a0c4407a835 00:07:51.633 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:51.633 [2024-07-25 13:03:43.780301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.633 13:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65119 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65119 /var/tmp/bdevperf.sock 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65119 ']' 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.891 13:03:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.148 [2024-07-25 13:03:44.108127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:52.149 [2024-07-25 13:03:44.108429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65119 ] 00:07:52.149 [2024-07-25 13:03:44.243783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.407 [2024-07-25 13:03:44.357034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.407 [2024-07-25 13:03:44.411047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.973 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.973 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:52.973 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:53.230 Nvme0n1 00:07:53.230 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:53.488 [ 00:07:53.488 { 00:07:53.488 "name": "Nvme0n1", 00:07:53.488 "aliases": [ 00:07:53.488 "488a2d86-4416-495a-b5f8-0a0c4407a835" 00:07:53.488 ], 00:07:53.488 "product_name": "NVMe disk", 00:07:53.488 "block_size": 4096, 00:07:53.488 "num_blocks": 38912, 00:07:53.488 "uuid": "488a2d86-4416-495a-b5f8-0a0c4407a835", 00:07:53.488 "assigned_rate_limits": { 00:07:53.488 "rw_ios_per_sec": 0, 00:07:53.488 
"rw_mbytes_per_sec": 0, 00:07:53.488 "r_mbytes_per_sec": 0, 00:07:53.488 "w_mbytes_per_sec": 0 00:07:53.488 }, 00:07:53.488 "claimed": false, 00:07:53.488 "zoned": false, 00:07:53.488 "supported_io_types": { 00:07:53.488 "read": true, 00:07:53.488 "write": true, 00:07:53.488 "unmap": true, 00:07:53.488 "flush": true, 00:07:53.488 "reset": true, 00:07:53.488 "nvme_admin": true, 00:07:53.488 "nvme_io": true, 00:07:53.488 "nvme_io_md": false, 00:07:53.488 "write_zeroes": true, 00:07:53.488 "zcopy": false, 00:07:53.488 "get_zone_info": false, 00:07:53.488 "zone_management": false, 00:07:53.488 "zone_append": false, 00:07:53.488 "compare": true, 00:07:53.488 "compare_and_write": true, 00:07:53.488 "abort": true, 00:07:53.488 "seek_hole": false, 00:07:53.488 "seek_data": false, 00:07:53.488 "copy": true, 00:07:53.488 "nvme_iov_md": false 00:07:53.488 }, 00:07:53.488 "memory_domains": [ 00:07:53.488 { 00:07:53.488 "dma_device_id": "system", 00:07:53.488 "dma_device_type": 1 00:07:53.488 } 00:07:53.488 ], 00:07:53.488 "driver_specific": { 00:07:53.488 "nvme": [ 00:07:53.488 { 00:07:53.488 "trid": { 00:07:53.488 "trtype": "TCP", 00:07:53.488 "adrfam": "IPv4", 00:07:53.488 "traddr": "10.0.0.2", 00:07:53.488 "trsvcid": "4420", 00:07:53.488 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:53.488 }, 00:07:53.488 "ctrlr_data": { 00:07:53.488 "cntlid": 1, 00:07:53.488 "vendor_id": "0x8086", 00:07:53.488 "model_number": "SPDK bdev Controller", 00:07:53.488 "serial_number": "SPDK0", 00:07:53.488 "firmware_revision": "24.09", 00:07:53.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.488 "oacs": { 00:07:53.488 "security": 0, 00:07:53.488 "format": 0, 00:07:53.488 "firmware": 0, 00:07:53.488 "ns_manage": 0 00:07:53.488 }, 00:07:53.488 "multi_ctrlr": true, 00:07:53.488 "ana_reporting": false 00:07:53.488 }, 00:07:53.488 "vs": { 00:07:53.488 "nvme_version": "1.3" 00:07:53.488 }, 00:07:53.488 "ns_data": { 00:07:53.488 "id": 1, 00:07:53.488 "can_share": true 00:07:53.488 } 00:07:53.488 } 00:07:53.489 ], 00:07:53.489 "mp_policy": "active_passive" 00:07:53.489 } 00:07:53.489 } 00:07:53.489 ] 00:07:53.489 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65142 00:07:53.489 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:53.489 13:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:53.746 Running I/O for 10 seconds... 
00:07:54.679 Latency(us) 00:07:54.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.679 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:54.679 =================================================================================================================== 00:07:54.679 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:07:54.679 00:07:55.613 13:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:07:55.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.613 Nvme0n1 : 2.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:07:55.613 =================================================================================================================== 00:07:55.613 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:07:55.613 00:07:55.871 true 00:07:55.871 13:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:07:55.872 13:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:56.130 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:56.130 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:56.130 13:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65142 00:07:56.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.696 Nvme0n1 : 3.00 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:07:56.696 =================================================================================================================== 00:07:56.696 Total : 7704.67 30.10 0.00 0.00 0.00 0.00 0.00 00:07:56.696 00:07:57.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.631 Nvme0n1 : 4.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:57.631 =================================================================================================================== 00:07:57.631 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:57.631 00:07:58.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.563 Nvme0n1 : 5.00 7594.60 29.67 0.00 0.00 0.00 0.00 0.00 00:07:58.563 =================================================================================================================== 00:07:58.563 Total : 7594.60 29.67 0.00 0.00 0.00 0.00 0.00 00:07:58.563 00:07:59.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.936 Nvme0n1 : 6.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:07:59.936 =================================================================================================================== 00:07:59.936 Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:07:59.936 00:08:00.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.502 Nvme0n1 : 7.00 7402.86 28.92 0.00 0.00 0.00 0.00 0.00 00:08:00.502 =================================================================================================================== 00:08:00.502 
Total : 7402.86 28.92 0.00 0.00 0.00 0.00 0.00 00:08:00.502 00:08:01.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.878 Nvme0n1 : 8.00 7334.75 28.65 0.00 0.00 0.00 0.00 0.00 00:08:01.878 =================================================================================================================== 00:08:01.878 Total : 7334.75 28.65 0.00 0.00 0.00 0.00 0.00 00:08:01.878 00:08:02.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.813 Nvme0n1 : 9.00 7281.78 28.44 0.00 0.00 0.00 0.00 0.00 00:08:02.813 =================================================================================================================== 00:08:02.813 Total : 7281.78 28.44 0.00 0.00 0.00 0.00 0.00 00:08:02.813 00:08:03.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.749 Nvme0n1 : 10.00 7252.10 28.33 0.00 0.00 0.00 0.00 0.00 00:08:03.749 =================================================================================================================== 00:08:03.749 Total : 7252.10 28.33 0.00 0.00 0.00 0.00 0.00 00:08:03.749 00:08:03.749 00:08:03.749 Latency(us) 00:08:03.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.749 Nvme0n1 : 10.02 7251.34 28.33 0.00 0.00 17646.80 6553.60 152520.15 00:08:03.749 =================================================================================================================== 00:08:03.749 Total : 7251.34 28.33 0.00 0.00 17646.80 6553.60 152520.15 00:08:03.749 0 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65119 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 65119 ']' 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 65119 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65119 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:03.749 killing process with pid 65119 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65119' 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 65119 00:08:03.749 Received shutdown signal, test time was about 10.000000 seconds 00:08:03.749 00:08:03.749 Latency(us) 00:08:03.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.749 =================================================================================================================== 00:08:03.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:03.749 13:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 65119 
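Aside: the cluster accounting behind the free/total checks that follow works out directly from the sizes used above; every number here is visible elsewhere in this log.

  # 400 MiB aio file / 4 MiB cluster size = 100 clusters; 99 of them are reported as
  # total_data_clusters after the grow (the remainder holds lvstore metadata), and
  # 49 of 50 before it.
  # The 150 MiB lvol rounds up to ceil(150 / 4) = 38 clusters (num_allocated_clusters above),
  # i.e. 38 * 1024 four-KiB blocks = 38912 blocks.
  echo $(( 99 - 38 ))   # 61, the free_clusters value the later checks expect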
00:08:04.008 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.266 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:04.525 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:04.525 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64762 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64762 00:08:04.784 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64762 Killed "${NVMF_APP[@]}" "$@" 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65275 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65275 00:08:04.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65275 ']' 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
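Aside: this is what makes the dirty variant dirty. The original target (pid 64762) is killed with SIGKILL right after the grow, so the lvstore is never unloaded cleanly. When the freshly started target re-registers the same backing file (the @77 step below), blobstore recovery rebuilds the lvstore state from on-disk metadata, which is what the "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices further down report. A rough sketch of that re-attach, reusing the identifiers from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # triggers bs_recover
  $rpc bdev_get_bdevs -b 488a2d86-4416-495a-b5f8-0a0c4407a835 -t 2000                          # the lvol (lvs/lvol) is back
  $rpc bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 | jq -r '.[0].free_clusters'   # still 61 after recovery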
00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.784 13:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.784 [2024-07-25 13:03:56.831877] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:04.784 [2024-07-25 13:03:56.832251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.784 [2024-07-25 13:03:56.967039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.042 [2024-07-25 13:03:57.079386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.042 [2024-07-25 13:03:57.079648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.042 [2024-07-25 13:03:57.079771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:05.042 [2024-07-25 13:03:57.079785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:05.042 [2024-07-25 13:03:57.079794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.042 [2024-07-25 13:03:57.079824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.042 [2024-07-25 13:03:57.134105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:05.978 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.978 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:05.979 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:05.979 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.979 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.979 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.979 13:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.979 [2024-07-25 13:03:58.135240] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:05.979 [2024-07-25 13:03:58.135500] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:05.979 [2024-07-25 13:03:58.135713] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:06.242 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:06.242 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 488a2d86-4416-495a-b5f8-0a0c4407a835 00:08:06.242 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=488a2d86-4416-495a-b5f8-0a0c4407a835 00:08:06.242 13:03:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.242 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:06.242 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.242 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.242 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.520 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 488a2d86-4416-495a-b5f8-0a0c4407a835 -t 2000 00:08:06.520 [ 00:08:06.520 { 00:08:06.520 "name": "488a2d86-4416-495a-b5f8-0a0c4407a835", 00:08:06.520 "aliases": [ 00:08:06.520 "lvs/lvol" 00:08:06.520 ], 00:08:06.520 "product_name": "Logical Volume", 00:08:06.520 "block_size": 4096, 00:08:06.520 "num_blocks": 38912, 00:08:06.520 "uuid": "488a2d86-4416-495a-b5f8-0a0c4407a835", 00:08:06.520 "assigned_rate_limits": { 00:08:06.520 "rw_ios_per_sec": 0, 00:08:06.520 "rw_mbytes_per_sec": 0, 00:08:06.520 "r_mbytes_per_sec": 0, 00:08:06.520 "w_mbytes_per_sec": 0 00:08:06.520 }, 00:08:06.520 "claimed": false, 00:08:06.520 "zoned": false, 00:08:06.520 "supported_io_types": { 00:08:06.520 "read": true, 00:08:06.520 "write": true, 00:08:06.520 "unmap": true, 00:08:06.520 "flush": false, 00:08:06.520 "reset": true, 00:08:06.520 "nvme_admin": false, 00:08:06.520 "nvme_io": false, 00:08:06.520 "nvme_io_md": false, 00:08:06.520 "write_zeroes": true, 00:08:06.520 "zcopy": false, 00:08:06.520 "get_zone_info": false, 00:08:06.520 "zone_management": false, 00:08:06.520 "zone_append": false, 00:08:06.520 "compare": false, 00:08:06.520 "compare_and_write": false, 00:08:06.520 "abort": false, 00:08:06.520 "seek_hole": true, 00:08:06.520 "seek_data": true, 00:08:06.520 "copy": false, 00:08:06.520 "nvme_iov_md": false 00:08:06.520 }, 00:08:06.520 "driver_specific": { 00:08:06.520 "lvol": { 00:08:06.520 "lvol_store_uuid": "96e3d624-c4cc-4a32-a23c-3851f11f7ca6", 00:08:06.520 "base_bdev": "aio_bdev", 00:08:06.520 "thin_provision": false, 00:08:06.520 "num_allocated_clusters": 38, 00:08:06.520 "snapshot": false, 00:08:06.520 "clone": false, 00:08:06.520 "esnap_clone": false 00:08:06.520 } 00:08:06.520 } 00:08:06.520 } 00:08:06.520 ] 00:08:06.520 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:06.520 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:06.520 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:06.790 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:06.790 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:06.790 13:03:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:08:07.048 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:07.048 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.306 [2024-07-25 13:03:59.353124] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:07.306 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:07.565 request: 00:08:07.565 { 00:08:07.565 "uuid": "96e3d624-c4cc-4a32-a23c-3851f11f7ca6", 00:08:07.565 "method": "bdev_lvol_get_lvstores", 00:08:07.565 "req_id": 1 00:08:07.565 } 00:08:07.565 Got JSON-RPC error response 00:08:07.565 response: 00:08:07.565 { 00:08:07.565 "code": -19, 00:08:07.565 "message": "No such device" 00:08:07.565 } 00:08:07.565 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:07.565 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.565 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.565 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.565 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.824 aio_bdev 00:08:07.824 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 488a2d86-4416-495a-b5f8-0a0c4407a835 00:08:07.824 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=488a2d86-4416-495a-b5f8-0a0c4407a835 00:08:07.824 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.824 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:07.824 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.824 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.824 13:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.083 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 488a2d86-4416-495a-b5f8-0a0c4407a835 -t 2000 00:08:08.342 [ 00:08:08.342 { 00:08:08.342 "name": "488a2d86-4416-495a-b5f8-0a0c4407a835", 00:08:08.342 "aliases": [ 00:08:08.342 "lvs/lvol" 00:08:08.342 ], 00:08:08.342 "product_name": "Logical Volume", 00:08:08.342 "block_size": 4096, 00:08:08.342 "num_blocks": 38912, 00:08:08.342 "uuid": "488a2d86-4416-495a-b5f8-0a0c4407a835", 00:08:08.342 "assigned_rate_limits": { 00:08:08.342 "rw_ios_per_sec": 0, 00:08:08.342 "rw_mbytes_per_sec": 0, 00:08:08.342 "r_mbytes_per_sec": 0, 00:08:08.342 "w_mbytes_per_sec": 0 00:08:08.342 }, 00:08:08.342 "claimed": false, 00:08:08.342 "zoned": false, 00:08:08.342 "supported_io_types": { 00:08:08.342 "read": true, 00:08:08.342 "write": true, 00:08:08.342 "unmap": true, 00:08:08.342 "flush": false, 00:08:08.342 "reset": true, 00:08:08.342 "nvme_admin": false, 00:08:08.342 "nvme_io": false, 00:08:08.342 "nvme_io_md": false, 00:08:08.342 "write_zeroes": true, 00:08:08.342 "zcopy": false, 00:08:08.342 "get_zone_info": false, 00:08:08.342 "zone_management": false, 00:08:08.342 "zone_append": false, 00:08:08.342 "compare": false, 00:08:08.342 "compare_and_write": false, 00:08:08.342 "abort": false, 00:08:08.342 "seek_hole": true, 00:08:08.342 "seek_data": true, 00:08:08.342 "copy": false, 00:08:08.342 "nvme_iov_md": false 00:08:08.342 }, 00:08:08.342 "driver_specific": { 00:08:08.342 "lvol": { 00:08:08.342 "lvol_store_uuid": "96e3d624-c4cc-4a32-a23c-3851f11f7ca6", 00:08:08.342 "base_bdev": "aio_bdev", 00:08:08.342 "thin_provision": false, 00:08:08.342 "num_allocated_clusters": 38, 00:08:08.342 "snapshot": false, 00:08:08.342 "clone": false, 00:08:08.342 "esnap_clone": false 00:08:08.342 } 00:08:08.342 } 00:08:08.342 } 00:08:08.342 ] 00:08:08.342 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:08.342 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:08.342 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:08:08.601 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:08.601 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:08.601 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:08.859 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:08.859 13:04:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 488a2d86-4416-495a-b5f8-0a0c4407a835 00:08:09.118 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96e3d624-c4cc-4a32-a23c-3851f11f7ca6 00:08:09.376 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.634 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:09.892 ************************************ 00:08:09.892 END TEST lvs_grow_dirty 00:08:09.892 ************************************ 00:08:09.893 00:08:09.893 real 0m20.423s 00:08:09.893 user 0m42.811s 00:08:09.893 sys 0m8.330s 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:09.893 13:04:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:09.893 nvmf_trace.0 00:08:09.893 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:09.893 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:09.893 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.893 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:10.152 13:04:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.152 rmmod nvme_tcp 00:08:10.152 rmmod nvme_fabrics 00:08:10.152 rmmod nvme_keyring 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65275 ']' 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65275 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 65275 ']' 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 65275 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65275 00:08:10.152 killing process with pid 65275 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65275' 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 65275 00:08:10.152 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 65275 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:10.411 ************************************ 00:08:10.411 END TEST nvmf_lvs_grow 00:08:10.411 ************************************ 00:08:10.411 00:08:10.411 real 0m41.257s 00:08:10.411 user 1m6.411s 00:08:10.411 sys 0m11.573s 
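The lvs_grow_dirty teardown traced above checks the lvstore cluster accounting with jq and then unwinds the stack in dependency order (lvol, lvstore, aio bdev, backing file). A minimal standalone sketch of that sequence, using the rpc.py path from this run; the UUIDs are the ones printed in this log and would differ on another run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  LVS_UUID=96e3d624-c4cc-4a32-a23c-3851f11f7ca6
  LVOL_UUID=488a2d86-4416-495a-b5f8-0a0c4407a835

  # cluster accounting checks performed by the test before teardown
  free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
  total=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 )) || echo "unexpected cluster counts: $free/$total"

  # teardown in dependency order: lvol -> lvstore -> aio bdev -> backing file
  "$RPC" bdev_lvol_delete "$LVOL_UUID"
  "$RPC" bdev_lvol_delete_lvstore -u "$LVS_UUID"
  "$RPC" bdev_aio_delete aio_bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev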
00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.411 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.671 ************************************ 00:08:10.671 START TEST nvmf_bdev_io_wait 00:08:10.671 ************************************ 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:10.671 * Looking for test storage... 00:08:10.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.671 
13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:10.671 Cannot find device "nvmf_tgt_br" 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:10.671 Cannot find device "nvmf_tgt_br2" 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:10.671 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:10.672 Cannot find device "nvmf_tgt_br" 00:08:10.672 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:10.672 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:10.672 Cannot find device "nvmf_tgt_br2" 00:08:10.672 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:10.672 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:10.672 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:10.931 13:04:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:10.931 13:04:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:10.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:10.931 00:08:10.931 --- 10.0.0.2 ping statistics --- 00:08:10.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.931 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:10.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:10.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:10.931 00:08:10.931 --- 10.0.0.3 ping statistics --- 00:08:10.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.931 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:10.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:10.931 00:08:10.931 --- 10.0.0.1 ping statistics --- 00:08:10.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.931 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.931 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65585 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65585 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 65585 ']' 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
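The nvmf_veth_init steps traced above stitch the host, a bridge, and the nvmf_tgt_ns_spdk namespace together before the target application is started inside that namespace. A condensed sketch of the topology they build, using the interface names and addresses printed in this log (ordering is simplified relative to the trace):

  # veth pairs, one end of each target pair moved into the namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiator on 10.0.0.1, target namespace on 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and enslave the bridge-side ends to nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic and verify connectivity both ways
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # host -> target namespace over the bridge
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1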
00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.932 13:04:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.190 [2024-07-25 13:04:03.149331] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:11.190 [2024-07-25 13:04:03.149429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.190 [2024-07-25 13:04:03.288528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.448 [2024-07-25 13:04:03.389431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.448 [2024-07-25 13:04:03.389497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.448 [2024-07-25 13:04:03.389524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.448 [2024-07-25 13:04:03.389531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.448 [2024-07-25 13:04:03.389538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.448 [2024-07-25 13:04:03.389731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.448 [2024-07-25 13:04:03.389891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.448 [2024-07-25 13:04:03.390481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.448 [2024-07-25 13:04:03.390524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.014 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.274 [2024-07-25 13:04:04.229643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.274 [2024-07-25 13:04:04.246341] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.274 Malloc0 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.274 [2024-07-25 13:04:04.311483] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65626 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65628 00:08:12.274 13:04:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:12.274 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:12.274 { 00:08:12.274 "params": { 00:08:12.274 "name": "Nvme$subsystem", 00:08:12.274 "trtype": "$TEST_TRANSPORT", 00:08:12.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.274 "adrfam": "ipv4", 00:08:12.274 "trsvcid": "$NVMF_PORT", 00:08:12.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.275 "hdgst": ${hdgst:-false}, 00:08:12.275 "ddgst": ${ddgst:-false} 00:08:12.275 }, 00:08:12.275 "method": "bdev_nvme_attach_controller" 00:08:12.275 } 00:08:12.275 EOF 00:08:12.275 )") 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65630 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:12.275 { 00:08:12.275 "params": { 00:08:12.275 "name": "Nvme$subsystem", 00:08:12.275 "trtype": "$TEST_TRANSPORT", 00:08:12.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.275 "adrfam": "ipv4", 00:08:12.275 "trsvcid": "$NVMF_PORT", 00:08:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.275 "hdgst": ${hdgst:-false}, 00:08:12.275 "ddgst": ${ddgst:-false} 00:08:12.275 }, 00:08:12.275 "method": "bdev_nvme_attach_controller" 00:08:12.275 } 00:08:12.275 EOF 00:08:12.275 )") 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65632 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:12.275 
13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:12.275 { 00:08:12.275 "params": { 00:08:12.275 "name": "Nvme$subsystem", 00:08:12.275 "trtype": "$TEST_TRANSPORT", 00:08:12.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.275 "adrfam": "ipv4", 00:08:12.275 "trsvcid": "$NVMF_PORT", 00:08:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.275 "hdgst": ${hdgst:-false}, 00:08:12.275 "ddgst": ${ddgst:-false} 00:08:12.275 }, 00:08:12.275 "method": "bdev_nvme_attach_controller" 00:08:12.275 } 00:08:12.275 EOF 00:08:12.275 )") 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:12.275 { 00:08:12.275 "params": { 00:08:12.275 "name": "Nvme$subsystem", 00:08:12.275 "trtype": "$TEST_TRANSPORT", 00:08:12.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.275 "adrfam": "ipv4", 00:08:12.275 "trsvcid": "$NVMF_PORT", 00:08:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.275 "hdgst": ${hdgst:-false}, 00:08:12.275 "ddgst": ${ddgst:-false} 00:08:12.275 }, 00:08:12.275 "method": "bdev_nvme_attach_controller" 00:08:12.275 } 00:08:12.275 EOF 00:08:12.275 )") 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:12.275 "params": { 00:08:12.275 "name": "Nvme1", 00:08:12.275 "trtype": "tcp", 00:08:12.275 "traddr": "10.0.0.2", 00:08:12.275 "adrfam": "ipv4", 00:08:12.275 "trsvcid": "4420", 00:08:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.275 "hdgst": false, 00:08:12.275 "ddgst": false 00:08:12.275 }, 00:08:12.275 "method": "bdev_nvme_attach_controller" 00:08:12.275 }' 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:12.275 "params": { 00:08:12.275 "name": "Nvme1", 00:08:12.275 "trtype": "tcp", 00:08:12.275 "traddr": "10.0.0.2", 00:08:12.275 "adrfam": "ipv4", 00:08:12.275 "trsvcid": "4420", 00:08:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.275 "hdgst": false, 00:08:12.275 "ddgst": false 00:08:12.275 }, 00:08:12.275 
"method": "bdev_nvme_attach_controller" 00:08:12.275 }' 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:12.275 "params": { 00:08:12.275 "name": "Nvme1", 00:08:12.275 "trtype": "tcp", 00:08:12.275 "traddr": "10.0.0.2", 00:08:12.275 "adrfam": "ipv4", 00:08:12.275 "trsvcid": "4420", 00:08:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.275 "hdgst": false, 00:08:12.275 "ddgst": false 00:08:12.275 }, 00:08:12.275 "method": "bdev_nvme_attach_controller" 00:08:12.275 }' 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:12.275 "params": { 00:08:12.275 "name": "Nvme1", 00:08:12.275 "trtype": "tcp", 00:08:12.275 "traddr": "10.0.0.2", 00:08:12.275 "adrfam": "ipv4", 00:08:12.275 "trsvcid": "4420", 00:08:12.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.275 "hdgst": false, 00:08:12.275 "ddgst": false 00:08:12.275 }, 00:08:12.275 "method": "bdev_nvme_attach_controller" 00:08:12.275 }' 00:08:12.275 13:04:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65626 00:08:12.275 [2024-07-25 13:04:04.374800] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:12.275 [2024-07-25 13:04:04.374808] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:12.275 [2024-07-25 13:04:04.374887] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:12.275 [2024-07-25 13:04:04.375941] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:12.275 [2024-07-25 13:04:04.379763] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:12.275 [2024-07-25 13:04:04.379821] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:12.275 [2024-07-25 13:04:04.404791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:12.275 [2024-07-25 13:04:04.404875] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:12.536 [2024-07-25 13:04:04.592350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.536 [2024-07-25 13:04:04.670295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.536 [2024-07-25 13:04:04.689323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:12.795 [2024-07-25 13:04:04.737156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.795 [2024-07-25 13:04:04.737966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.795 [2024-07-25 13:04:04.767243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:12.795 [2024-07-25 13:04:04.814189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.795 [2024-07-25 13:04:04.816392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.795 [2024-07-25 13:04:04.821957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:12.795 Running I/O for 1 seconds... 00:08:12.795 [2024-07-25 13:04:04.865801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.795 Running I/O for 1 seconds... 00:08:12.795 [2024-07-25 13:04:04.912304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:12.795 [2024-07-25 13:04:04.959191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.795 Running I/O for 1 seconds... 00:08:13.054 Running I/O for 1 seconds... 
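Before the four bdevperf workers above start their one-second runs, the target side is configured entirely through rpc_cmd calls, and each worker receives a generated JSON config on /dev/fd/63 that attaches Nvme1 over TCP. A minimal sketch of the same flow driven by hand; the bdevperf.json file name and the bdev subsystem-config wrapper around the attach-controller entry are assumptions, everything else is taken from the parameters printed in this log:

  # target side (the test issues these inside nvmf_tgt_ns_spdk via rpc_cmd)
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_set_options -p 5 -c 1
  "$RPC" framework_start_init
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: one of the four workers (write workload, core mask 0x10),
  # fed the attach-controller params wrapped in a bdev subsystem block (assumed wrapper)
  cat > bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256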
00:08:13.989 00:08:13.989 Latency(us) 00:08:13.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.989 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:13.989 Nvme1n1 : 1.02 6466.94 25.26 0.00 0.00 19501.70 6345.08 33363.78 00:08:13.989 =================================================================================================================== 00:08:13.989 Total : 6466.94 25.26 0.00 0.00 19501.70 6345.08 33363.78 00:08:13.989 00:08:13.989 Latency(us) 00:08:13.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.989 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:13.989 Nvme1n1 : 1.01 6402.54 25.01 0.00 0.00 19928.68 5630.14 40751.48 00:08:13.989 =================================================================================================================== 00:08:13.989 Total : 6402.54 25.01 0.00 0.00 19928.68 5630.14 40751.48 00:08:13.989 00:08:13.989 Latency(us) 00:08:13.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.989 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:13.989 Nvme1n1 : 1.01 7218.19 28.20 0.00 0.00 17623.09 10843.23 28716.68 00:08:13.989 =================================================================================================================== 00:08:13.989 Total : 7218.19 28.20 0.00 0.00 17623.09 10843.23 28716.68 00:08:13.989 00:08:13.989 Latency(us) 00:08:13.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.989 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:13.989 Nvme1n1 : 1.00 175308.44 684.80 0.00 0.00 727.48 376.09 1146.88 00:08:13.989 =================================================================================================================== 00:08:13.989 Total : 175308.44 684.80 0.00 0.00 727.48 376.09 1146.88 00:08:13.989 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65628 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65630 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65632 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.248 rmmod nvme_tcp 00:08:14.248 rmmod nvme_fabrics 00:08:14.248 rmmod nvme_keyring 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65585 ']' 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65585 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 65585 ']' 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 65585 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65585 00:08:14.248 killing process with pid 65585 00:08:14.248 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.249 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.249 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65585' 00:08:14.249 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 65585 00:08:14.249 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 65585 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:14.508 00:08:14.508 real 0m4.072s 00:08:14.508 user 0m17.861s 00:08:14.508 sys 0m2.151s 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.508 ************************************ 00:08:14.508 END TEST nvmf_bdev_io_wait 00:08:14.508 ************************************ 00:08:14.508 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.767 ************************************ 00:08:14.767 START TEST nvmf_queue_depth 00:08:14.767 ************************************ 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:14.767 * Looking for test storage... 00:08:14.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:14.767 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:14.768 Cannot find device "nvmf_tgt_br" 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.768 Cannot find device "nvmf_tgt_br2" 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:14.768 Cannot find device "nvmf_tgt_br" 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:14.768 Cannot find device "nvmf_tgt_br2" 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:14.768 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.028 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.028 13:04:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:15.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:08:15.028 00:08:15.028 --- 10.0.0.2 ping statistics --- 00:08:15.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.028 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:15.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:15.028 00:08:15.028 --- 10.0.0.3 ping statistics --- 00:08:15.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.028 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:15.028 00:08:15.028 --- 10.0.0.1 ping statistics --- 00:08:15.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.028 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=65860 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 65860 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 65860 ']' 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.028 13:04:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.028 [2024-07-25 13:04:07.210987] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
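For readers following the trace: the nvmf_veth_init block above reduces to the topology below. This is a condensed sketch reconstructed from the commands traced in the log (interface names and addresses copied verbatim); the real nvmf/common.sh first attempts to clean up stale devices, which is where the "Cannot find device" and "Cannot open network namespace" messages above come from.

# One namespace for the target, one veth pair per interface; the *_if ends carry
# the IP addresses, the *_br ends are enslaved to a common bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP port 4420 on the initiator-side veth
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic hairpin across the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator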
00:08:15.028 [2024-07-25 13:04:07.211079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.287 [2024-07-25 13:04:07.345967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.287 [2024-07-25 13:04:07.438483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.287 [2024-07-25 13:04:07.438578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.287 [2024-07-25 13:04:07.438607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.287 [2024-07-25 13:04:07.438616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.287 [2024-07-25 13:04:07.438623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.287 [2024-07-25 13:04:07.438657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.546 [2024-07-25 13:04:07.494082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.113 [2024-07-25 13:04:08.189533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.113 Malloc0 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.113 [2024-07-25 13:04:08.252796] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=65898 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 65898 /var/tmp/bdevperf.sock 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 65898 ']' 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.113 13:04:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.372 [2024-07-25 13:04:08.309951] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
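To make the queue-depth setup easier to follow: rpc_cmd is autotest_common.sh's wrapper that forwards to scripts/rpc.py over the application's RPC socket, so the calls traced above amount to the provisioning sequence below (arguments copied from the log; the target is the nvmf_tgt started with -m 0x2 inside the namespace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Target side, over the default /var/tmp/spdk.sock:
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf runs in the default namespace with its own RPC socket;
# -z holds the workload until a perform_tests RPC arrives, which is why the script
# later attaches the NVMe-oF controller over /var/tmp/bdevperf.sock and then calls
# bdevperf.py perform_tests (queue depth 1024, 4 KiB I/O, verify, 10 seconds).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &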
00:08:16.372 [2024-07-25 13:04:08.310043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65898 ] 00:08:16.372 [2024-07-25 13:04:08.452090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.630 [2024-07-25 13:04:08.565788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.630 [2024-07-25 13:04:08.624557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.195 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.195 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:17.195 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:17.195 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.195 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:17.452 NVMe0n1 00:08:17.452 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.452 13:04:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:17.452 Running I/O for 10 seconds... 00:08:29.653 00:08:29.653 Latency(us) 00:08:29.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.653 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:29.653 Verification LBA range: start 0x0 length 0x4000 00:08:29.653 NVMe0n1 : 10.07 8488.38 33.16 0.00 0.00 120052.27 17158.52 98184.84 00:08:29.653 =================================================================================================================== 00:08:29.653 Total : 8488.38 33.16 0.00 0.00 120052.27 17158.52 98184.84 00:08:29.653 0 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 65898 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 65898 ']' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 65898 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65898 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.653 killing process with pid 65898 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65898' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # 
kill 65898 00:08:29.653 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.653 00:08:29.653 Latency(us) 00:08:29.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.653 =================================================================================================================== 00:08:29.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 65898 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.653 rmmod nvme_tcp 00:08:29.653 rmmod nvme_fabrics 00:08:29.653 rmmod nvme_keyring 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 65860 ']' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 65860 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 65860 ']' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 65860 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65860 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:29.653 killing process with pid 65860 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65860' 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 65860 00:08:29.653 13:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 65860 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.653 13:04:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:29.653 00:08:29.653 real 0m13.520s 00:08:29.653 user 0m23.461s 00:08:29.653 sys 0m2.220s 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.653 ************************************ 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.653 END TEST nvmf_queue_depth 00:08:29.653 ************************************ 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.653 ************************************ 00:08:29.653 START TEST nvmf_target_multipath 00:08:29.653 ************************************ 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:29.653 * Looking for test storage... 
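The teardown traced above (nvmftestfini, i.e. nvmfcleanup, killprocess and nvmf_tcp_fini) is, in outline, the reverse of the setup. A rough sketch of its initiator-side effect follows; killprocess and _remove_spdk_ns are helpers whose bodies are not shown in this trace, so the signal used and the namespace deletion below are assumptions about what they do:

sync
modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                    # stop the nvmf_tgt started for this test (pid 65860 here); exact signal not visible in the trace
ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns: drop the target namespace
ip -4 addr flush nvmf_init_if      # clear the initiator address so the next test can re-run nvmf_veth_init

The real/user/sys lines above time the whole queue_depth.sh run; user time exceeds wall-clock time because it sums CPU time across the busy-polling SPDK target and bdevperf processes.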
00:08:29.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:29.653 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.654 13:04:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:29.654 Cannot find device "nvmf_tgt_br" 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.654 Cannot find device "nvmf_tgt_br2" 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:29.654 Cannot find device "nvmf_tgt_br" 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:29.654 Cannot find device "nvmf_tgt_br2" 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:29.654 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:29.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:29.655 00:08:29.655 --- 10.0.0.2 ping statistics --- 00:08:29.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.655 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:29.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:29.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:29.655 00:08:29.655 --- 10.0.0.3 ping statistics --- 00:08:29.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.655 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:29.655 00:08:29.655 --- 10.0.0.1 ping statistics --- 00:08:29.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.655 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66212 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66212 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 66212 ']' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
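Note the core mask for this test: nvmfappstart -m 0xF gives the multipath target four cores, where the queue-depth test above used -m 0x2 (a single core). Stripped of the xtrace noise, the start-up being traced here is roughly:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!        # 66212 in this run
# waitforlisten then blocks until the app is up and serving RPC on /var/tmp/spdk.sock,
# which is what the "Waiting for process to start up..." message above refers to.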
00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.655 13:04:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:29.655 [2024-07-25 13:04:20.843766] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:29.655 [2024-07-25 13:04:20.844426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.655 [2024-07-25 13:04:20.983217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.655 [2024-07-25 13:04:21.091659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.655 [2024-07-25 13:04:21.091723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.655 [2024-07-25 13:04:21.091750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.655 [2024-07-25 13:04:21.091758] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.655 [2024-07-25 13:04:21.091765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.655 [2024-07-25 13:04:21.091918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.655 [2024-07-25 13:04:21.092076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.655 [2024-07-25 13:04:21.092570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.655 [2024-07-25 13:04:21.092606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.655 [2024-07-25 13:04:21.148626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.655 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.655 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:29.655 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.655 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.655 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:29.914 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.914 13:04:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:30.173 [2024-07-25 13:04:22.133514] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.173 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:30.431 Malloc0 00:08:30.432 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:30.690 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.948 13:04:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.948 [2024-07-25 13:04:23.099454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.948 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:31.206 [2024-07-25 13:04:23.315647] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:31.206 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid=0dc0e430-4c84-41eb-b0bf-db443f090fb1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:31.465 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid=0dc0e430-4c84-41eb-b0bf-db443f090fb1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:31.465 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:31.465 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:31.465 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:31.465 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:31.465 13:04:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:33.994 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:33.994 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:33.994 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:33.994 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:33.994 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:33.994 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66307 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:33.995 13:04:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:33.995 [global] 00:08:33.995 thread=1 00:08:33.995 invalidate=1 00:08:33.995 rw=randrw 00:08:33.995 time_based=1 00:08:33.995 runtime=6 00:08:33.995 ioengine=libaio 00:08:33.995 direct=1 00:08:33.995 bs=4096 00:08:33.995 iodepth=128 00:08:33.995 norandommap=0 00:08:33.995 numjobs=1 00:08:33.995 00:08:33.995 verify_dump=1 00:08:33.995 verify_backlog=512 00:08:33.995 verify_state_save=0 00:08:33.995 do_verify=1 00:08:33.995 verify=crc32c-intel 00:08:33.995 [job0] 00:08:33.995 filename=/dev/nvme0n1 00:08:33.995 Could not set queue depth (nvme0n1) 00:08:33.995 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:33.995 fio-3.35 00:08:33.995 Starting 1 thread 00:08:34.562 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:34.821 13:04:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.079 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:35.336 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:35.597 13:04:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66307 00:08:39.782 00:08:39.782 job0: (groupid=0, jobs=1): err= 0: pid=66328: Thu Jul 25 13:04:31 2024 00:08:39.782 read: IOPS=8685, BW=33.9MiB/s (35.6MB/s)(204MiB/6008msec) 00:08:39.782 slat (usec): min=7, max=7375, avg=69.64, stdev=270.57 00:08:39.782 clat (usec): min=2186, max=19881, avg=10137.51, stdev=1791.43 00:08:39.782 lat (usec): min=2198, max=19895, avg=10207.15, stdev=1796.22 00:08:39.782 clat percentiles (usec): 00:08:39.782 | 1.00th=[ 5276], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 9110], 00:08:39.782 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:08:39.782 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11863], 95.00th=[13829], 00:08:39.782 | 99.00th=[16057], 99.50th=[16581], 99.90th=[17957], 99.95th=[18482], 00:08:39.782 | 99.99th=[18744] 00:08:39.782 bw ( KiB/s): min= 3248, max=24576, per=50.38%, avg=17504.00, stdev=6927.38, samples=12 00:08:39.782 iops : min= 812, max= 6144, avg=4376.00, stdev=1731.85, samples=12 00:08:39.782 write: IOPS=5238, BW=20.5MiB/s (21.5MB/s)(103MiB/5032msec); 0 zone resets 00:08:39.782 slat (usec): min=15, max=3839, avg=78.14, stdev=197.87 00:08:39.782 clat (usec): min=3175, max=22746, avg=8836.70, stdev=1662.83 00:08:39.782 lat (usec): min=3208, max=22772, avg=8914.84, stdev=1670.05 00:08:39.782 clat percentiles (usec): 00:08:39.782 | 1.00th=[ 3884], 5.00th=[ 5538], 10.00th=[ 7046], 20.00th=[ 7963], 00:08:39.782 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:08:39.782 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[11076], 00:08:39.782 | 99.00th=[13566], 99.50th=[14615], 99.90th=[16909], 99.95th=[17171], 00:08:39.782 | 99.99th=[17957] 00:08:39.782 bw ( KiB/s): min= 3488, max=24240, per=83.66%, avg=17532.00, stdev=6781.80, samples=12 00:08:39.782 iops : min= 872, max= 6060, avg=4383.00, stdev=1695.45, samples=12 00:08:39.782 lat (msec) : 4=0.51%, 10=59.89%, 20=39.59%, 50=0.01% 00:08:39.782 cpu : usr=4.81%, sys=20.24%, ctx=4636, majf=0, minf=121 00:08:39.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:39.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:39.782 issued rwts: total=52181,26362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:39.782 00:08:39.782 Run status group 0 (all jobs): 00:08:39.782 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=204MiB (214MB), run=6008-6008msec 00:08:39.782 WRITE: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=103MiB (108MB), run=5032-5032msec 00:08:39.782 00:08:39.782 Disk stats (read/write): 00:08:39.782 nvme0n1: ios=51450/25862, merge=0/0, ticks=501156/214406, in_queue=715562, util=98.53% 00:08:39.782 13:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:40.040 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66403 00:08:40.299 13:04:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:40.299 [global] 00:08:40.299 thread=1 00:08:40.299 invalidate=1 00:08:40.299 rw=randrw 00:08:40.299 time_based=1 00:08:40.299 runtime=6 00:08:40.299 ioengine=libaio 00:08:40.299 direct=1 00:08:40.299 bs=4096 00:08:40.299 iodepth=128 00:08:40.299 norandommap=0 00:08:40.299 numjobs=1 00:08:40.299 00:08:40.299 verify_dump=1 00:08:40.299 verify_backlog=512 00:08:40.299 verify_state_save=0 00:08:40.299 do_verify=1 00:08:40.299 verify=crc32c-intel 00:08:40.299 [job0] 00:08:40.299 filename=/dev/nvme0n1 00:08:40.558 Could not set queue depth (nvme0n1) 00:08:40.558 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:40.558 fio-3.35 00:08:40.558 Starting 1 thread 00:08:41.492 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:41.751 13:04:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:42.009 
13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:42.009 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:42.269 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:42.528 13:04:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66403 00:08:46.716 00:08:46.716 job0: (groupid=0, jobs=1): err= 0: pid=66424: Thu Jul 25 13:04:38 2024 00:08:46.716 read: IOPS=9594, BW=37.5MiB/s (39.3MB/s)(225MiB/6006msec) 00:08:46.716 slat (usec): min=8, max=7149, avg=53.75, stdev=231.87 00:08:46.716 clat (usec): min=413, max=24197, avg=9174.52, stdev=2752.11 00:08:46.716 lat (usec): min=433, max=24207, avg=9228.28, stdev=2763.13 00:08:46.716 clat percentiles (usec): 00:08:46.716 | 1.00th=[ 2343], 5.00th=[ 4621], 10.00th=[ 5735], 20.00th=[ 7373], 00:08:46.716 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:08:46.716 | 70.00th=[10290], 80.00th=[10945], 90.00th=[12125], 95.00th=[13960], 00:08:46.716 | 99.00th=[17171], 99.50th=[19006], 99.90th=[21365], 99.95th=[22152], 00:08:46.716 | 99.99th=[23462] 00:08:46.716 bw ( KiB/s): min= 6392, max=33488, per=51.57%, avg=19792.00, stdev=6363.12, samples=11 00:08:46.716 iops : min= 1598, max= 8372, avg=4948.00, stdev=1590.78, samples=11 00:08:46.716 write: IOPS=5640, BW=22.0MiB/s (23.1MB/s)(118MiB/5361msec); 0 zone resets 00:08:46.716 slat (usec): min=15, max=2912, avg=62.59, stdev=160.80 00:08:46.716 clat (usec): min=447, max=20271, avg=7730.34, stdev=2438.34 00:08:46.716 lat (usec): min=485, max=20314, avg=7792.93, stdev=2452.12 00:08:46.716 clat percentiles (usec): 00:08:46.716 | 1.00th=[ 2409], 5.00th=[ 3490], 10.00th=[ 4146], 20.00th=[ 5276], 00:08:46.716 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 8586], 00:08:46.716 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11076], 00:08:46.716 | 99.00th=[13566], 99.50th=[14746], 99.90th=[17957], 99.95th=[19268], 00:08:46.716 | 99.99th=[20055] 00:08:46.716 bw ( KiB/s): min= 6824, max=34496, per=87.94%, avg=19843.64, stdev=6459.73, samples=11 00:08:46.716 iops : min= 1706, max= 8624, avg=4960.91, stdev=1614.93, samples=11 00:08:46.716 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.09% 00:08:46.716 lat (msec) : 2=0.52%, 4=4.51%, 10=66.57%, 20=28.02%, 50=0.23% 00:08:46.716 cpu : usr=5.80%, sys=21.37%, ctx=5102, majf=0, minf=78 00:08:46.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:46.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.716 issued rwts: total=57622,30241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.716 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:08:46.716 00:08:46.716 Run status group 0 (all jobs): 00:08:46.716 READ: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=225MiB (236MB), run=6006-6006msec 00:08:46.716 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=118MiB (124MB), run=5361-5361msec 00:08:46.716 00:08:46.716 Disk stats (read/write): 00:08:46.716 nvme0n1: ios=56845/29415, merge=0/0, ticks=500441/213702, in_queue=714143, util=98.61% 00:08:46.716 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:46.974 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.975 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:08:46.975 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:46.975 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.975 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:46.975 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.975 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:08:46.975 13:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.232 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:47.232 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:47.232 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:47.232 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:47.232 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:47.232 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:47.233 rmmod nvme_tcp 00:08:47.233 rmmod nvme_fabrics 00:08:47.233 rmmod nvme_keyring 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # 
'[' -n 66212 ']' 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66212 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 66212 ']' 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 66212 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66212 00:08:47.233 killing process with pid 66212 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66212' 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 66212 00:08:47.233 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 66212 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:47.491 ************************************ 00:08:47.491 END TEST nvmf_target_multipath 00:08:47.491 ************************************ 00:08:47.491 00:08:47.491 real 0m19.354s 00:08:47.491 user 1m12.090s 00:08:47.491 sys 0m9.655s 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.491 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.750 
************************************ 00:08:47.750 START TEST nvmf_zcopy 00:08:47.750 ************************************ 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:47.750 * Looking for test storage... 00:08:47.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:47.750 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:47.751 Cannot find device "nvmf_tgt_br" 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.751 Cannot find device "nvmf_tgt_br2" 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:47.751 Cannot find device "nvmf_tgt_br" 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:47.751 Cannot find device "nvmf_tgt_br2" 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:47.751 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:48.009 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:48.010 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.010 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.010 13:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:48.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:08:48.010 00:08:48.010 --- 10.0.0.2 ping statistics --- 00:08:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.010 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:48.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:08:48.010 00:08:48.010 --- 10.0.0.3 ping statistics --- 00:08:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.010 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:48.010 00:08:48.010 --- 10.0.0.1 ping statistics --- 00:08:48.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.010 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66674 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66674 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 66674 ']' 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.010 13:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:48.010 [2024-07-25 13:04:40.195485] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:48.010 [2024-07-25 13:04:40.195581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.268 [2024-07-25 13:04:40.338146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.268 [2024-07-25 13:04:40.444452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.268 [2024-07-25 13:04:40.444515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.268 [2024-07-25 13:04:40.444528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.268 [2024-07-25 13:04:40.444537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.268 [2024-07-25 13:04:40.444545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.268 [2024-07-25 13:04:40.444574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.527 [2024-07-25 13:04:40.500359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.094 [2024-07-25 13:04:41.195382] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.094 [2024-07-25 13:04:41.215399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.094 malloc0 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:49.094 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:49.094 { 00:08:49.094 "params": { 00:08:49.094 "name": "Nvme$subsystem", 00:08:49.094 "trtype": "$TEST_TRANSPORT", 00:08:49.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:49.094 "adrfam": "ipv4", 00:08:49.094 "trsvcid": "$NVMF_PORT", 00:08:49.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:49.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:49.094 "hdgst": ${hdgst:-false}, 00:08:49.095 "ddgst": ${ddgst:-false} 00:08:49.095 }, 00:08:49.095 "method": "bdev_nvme_attach_controller" 00:08:49.095 } 00:08:49.095 EOF 00:08:49.095 )") 00:08:49.095 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:49.095 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:08:49.095 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:49.095 13:04:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:49.095 "params": { 00:08:49.095 "name": "Nvme1", 00:08:49.095 "trtype": "tcp", 00:08:49.095 "traddr": "10.0.0.2", 00:08:49.095 "adrfam": "ipv4", 00:08:49.095 "trsvcid": "4420", 00:08:49.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:49.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:49.095 "hdgst": false, 00:08:49.095 "ddgst": false 00:08:49.095 }, 00:08:49.095 "method": "bdev_nvme_attach_controller" 00:08:49.095 }' 00:08:49.353 [2024-07-25 13:04:41.311185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:49.353 [2024-07-25 13:04:41.311305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66707 ] 00:08:49.353 [2024-07-25 13:04:41.453695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.611 [2024-07-25 13:04:41.581813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.611 [2024-07-25 13:04:41.651041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.611 Running I/O for 10 seconds... 00:09:01.813 00:09:01.813 Latency(us) 00:09:01.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.813 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:01.813 Verification LBA range: start 0x0 length 0x1000 00:09:01.813 Nvme1n1 : 10.02 6185.34 48.32 0.00 0.00 20628.69 2651.23 33363.78 00:09:01.813 =================================================================================================================== 00:09:01.813 Total : 6185.34 48.32 0.00 0.00 20628.69 2651.23 33363.78 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66823 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:01.813 { 00:09:01.813 "params": { 00:09:01.813 "name": "Nvme$subsystem", 00:09:01.813 "trtype": "$TEST_TRANSPORT", 00:09:01.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.813 "adrfam": "ipv4", 00:09:01.813 "trsvcid": "$NVMF_PORT", 00:09:01.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.813 "hdgst": ${hdgst:-false}, 00:09:01.813 "ddgst": ${ddgst:-false} 00:09:01.813 }, 00:09:01.813 "method": "bdev_nvme_attach_controller" 00:09:01.813 } 00:09:01.813 
EOF 00:09:01.813 )") 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:01.813 [2024-07-25 13:04:52.014098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.014146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:01.813 13:04:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:01.813 "params": { 00:09:01.813 "name": "Nvme1", 00:09:01.813 "trtype": "tcp", 00:09:01.813 "traddr": "10.0.0.2", 00:09:01.813 "adrfam": "ipv4", 00:09:01.813 "trsvcid": "4420", 00:09:01.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:01.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:01.813 "hdgst": false, 00:09:01.813 "ddgst": false 00:09:01.813 }, 00:09:01.813 "method": "bdev_nvme_attach_controller" 00:09:01.813 }' 00:09:01.813 [2024-07-25 13:04:52.026056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.026089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.038084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.038114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.050065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.050094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.062069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.062097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.064933] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
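The heredoc above is gen_nvmf_target_json's template; the rendered fragment printed just above (a single bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420) is what bdevperf reads through the --json /dev/fd/62 and /dev/fd/63 process substitutions. A standalone equivalent of these runs is sketched below; the outer subsystems/bdev wrapper and the /tmp file path are assumptions for illustration, only the inner entry appears verbatim in this log:

cat > /tmp/bdevperf_nvmf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
# First job above: 10 s, queue depth 128, verify workload, 8 KiB I/O (~6185 IOPS reported)
build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192
# Second job in this log: 5 s random read/write at a 50/50 mix, same attach config
build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192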
00:09:01.813 [2024-07-25 13:04:52.065016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66823 ] 00:09:01.813 [2024-07-25 13:04:52.074079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.074107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.086095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.086122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.098085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.098112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.110087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.110113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.122121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.122156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.134114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.134170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.150152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.150183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.158129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.158158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.170129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.170183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.182135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.182165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.194134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.194162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.205518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.813 [2024-07-25 13:04:52.206144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.206173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.218147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.218183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.230145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.230173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.242147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.242175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.254154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.254183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.266163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.266195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.278164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.278195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.290158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.290187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.302160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.302187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.313571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.813 [2024-07-25 13:04:52.314177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.314204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.326174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.326216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.338188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.338235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.350202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.350247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.362189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.362235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.374208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.813 [2024-07-25 13:04:52.374260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.382267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:01.813 [2024-07-25 13:04:52.386219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:01.813 [2024-07-25 13:04:52.386248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.813 [2024-07-25 13:04:52.398201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.398248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.410199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.410242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.422194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.422252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.434250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.434285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.446258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.446292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.458271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.458307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.470281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.470329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.482283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.482335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.494305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.494354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 Running I/O for 5 seconds... 
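From this point to the end of the 5-second randrw job, the log is a steady stream of paired errors: spdk_nvmf_subsystem_add_ns_ext rejects NSID 1 as already in use, and nvmf_rpc_ns_paused then reports that the namespace could not be added. These look intentional rather than a malfunction: while I/O is in flight the test keeps re-issuing the namespace-add RPC for an NSID that malloc0 already occupies, which (judging by the nvmf_rpc_ns_paused callback named in the message) presumably drives the subsystem through its pause/resume path with zero-copy requests outstanding, and every attempt is expected to fail. One such failure can be reproduced by hand against the target configured earlier in this log (a sketch; the scripts/rpc.py path and the exit-status note are assumptions):

# NSID 1 is already taken by malloc0, so a second add must be rejected
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Expected target-side log lines (matching the pairs in this run):
#   subsystem.c: ... *ERROR*: Requested NSID 1 already in use
#   nvmf_rpc.c:  ... *ERROR*: Unable to add namespace
# The RPC client returns a non-zero exit status for the failed call.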
00:09:01.814 [2024-07-25 13:04:52.506324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.506351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.523253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.523305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.540126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.540161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.556030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.556065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.567456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.567490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.583693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.583739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.600204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.600269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.614947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.614998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.631168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.631219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.648714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.648750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.662787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.662822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.678677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.678711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.696097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.696136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.712540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.712576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.728977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 
[2024-07-25 13:04:52.729017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.747210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.747245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.762624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.762661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.772154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.772192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.788006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.788041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.805367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.805402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.820962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.820998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.829766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.829800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.846089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.846124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.862793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.862860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.878752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.878797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.894518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.894552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.903911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.903977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.919824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.919906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.936602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.936641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.954052] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.954088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.969957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.970024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:52.988520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:52.988559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.004463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.004502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.020825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.020878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.038252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.038303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.053458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.053496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.068938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.068976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.079548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.079586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.093272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.093308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.107265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.107300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.123212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.123249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.140254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.140291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.156266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.156303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.166329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.814 [2024-07-25 13:04:53.166368] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.814 [2024-07-25 13:04:53.183236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.183288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.198090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.198126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.216417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.216453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.232535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.232573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.249203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.249243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.265973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.266008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.282515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.282553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.298055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.298090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.317444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.317480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.331894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.331956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.347779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.347815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.364571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.364606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.383609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.383646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.398432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.398468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.408513] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.408549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.422982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.423018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.438762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.438797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.455233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.455268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.470790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.470825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.480417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.480452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.496254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.496289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.513156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.513206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.530232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.530267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.546287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.546336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.564216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.564251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.578884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.578951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.589433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.589471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.604019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.604054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.621102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.621166] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.636596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.636631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.646652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.646689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.662013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.662047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.680588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.680625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.694929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.694966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.710193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.710228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.728259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.728295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.742901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.742934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.758078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.758112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.768180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.768231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.783019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.783055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.800557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.800594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.816691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.816737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.834833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.834914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.849235] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.849270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.865007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.865043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.883837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.883934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.898601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.898639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.914261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.914296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.930568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.930605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.947590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.947627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.964067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.964102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.982534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.982571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.815 [2024-07-25 13:04:53.998124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.815 [2024-07-25 13:04:53.998160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.007408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.007456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.022792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.022876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.033039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.033075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.048406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.048443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.064728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.064765] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.083446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.083483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.098812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.098891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.116134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.116171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.132552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.132590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.149652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.149689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.167126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.167161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.182148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.182186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.197767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.197802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.207549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.207585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.222787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.222823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.240129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.240172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.075 [2024-07-25 13:04:54.255765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.075 [2024-07-25 13:04:54.255802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.265827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.265918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.280804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.280881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.299111] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.299146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.314339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.314378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.332914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.332950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.348348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.348397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.367005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.367043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.382451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.382488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.400549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.400585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.417458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.417493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.435037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.435072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.449600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.449636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.465181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.465218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.482625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.482662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.497526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.497562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.507123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.507161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.334 [2024-07-25 13:04:54.523737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.334 [2024-07-25 13:04:54.523774] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.540346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.593 [2024-07-25 13:04:54.540382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.557132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.593 [2024-07-25 13:04:54.557184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.572992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.593 [2024-07-25 13:04:54.573028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.590144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.593 [2024-07-25 13:04:54.590179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.606306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.593 [2024-07-25 13:04:54.606374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.624113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.593 [2024-07-25 13:04:54.624151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.638806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.593 [2024-07-25 13:04:54.638892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.593 [2024-07-25 13:04:54.653578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.653620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.594 [2024-07-25 13:04:54.668918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.668955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.594 [2024-07-25 13:04:54.684890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.684926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.594 [2024-07-25 13:04:54.701247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.701281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.594 [2024-07-25 13:04:54.718992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.719027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.594 [2024-07-25 13:04:54.735647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.735714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.594 [2024-07-25 13:04:54.751945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.751989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.594 [2024-07-25 13:04:54.769178] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.594 [2024-07-25 13:04:54.769213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.785071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.785144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.803378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.803415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.818631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.818668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.836567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.836730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.852619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.852851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.869010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.869233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.884804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.885007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.903535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.903712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.918087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.918281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.934000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.934176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.951760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.951796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.966156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.966190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.982122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.982173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:54.998030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:54.998063] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:55.014875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:55.014941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.853 [2024-07-25 13:04:55.031831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.853 [2024-07-25 13:04:55.031910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.048119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.048171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.066446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.066486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.081556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.081597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.091856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.091922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.106728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.106777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.124359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.124397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.141338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.141376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.157418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.157455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.174505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.174542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.190477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.190512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.207285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.207329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.229689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.229736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.240302] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.240497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.253254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.253421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.272568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.272732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.113 [2024-07-25 13:04:55.287361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.113 [2024-07-25 13:04:55.287518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.302796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.303030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.318559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.318745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.335561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.335695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.353429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.353571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.367884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.368059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.384810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.384982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.399725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.399891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.409599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.409745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.426173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.426363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.440979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.441097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.457353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.457486] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.474987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.475120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.490458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.490585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.500492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.500621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.516756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.516918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.532488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.532614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.373 [2024-07-25 13:04:55.550093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.373 [2024-07-25 13:04:55.550251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.564700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.564897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.580738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.580943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.597726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.597774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.615122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.615169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.631632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.631661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.647246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.647275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.656232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.656262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.672165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.672211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.681895] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.681936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.698084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.698114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.716400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.716446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.732285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.732333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.748071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.748099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.766823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.766862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.781735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.781764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.798451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.798482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.633 [2024-07-25 13:04:55.815955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.633 [2024-07-25 13:04:55.815985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.892 [2024-07-25 13:04:55.830519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.892 [2024-07-25 13:04:55.830550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.892 [2024-07-25 13:04:55.846409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.892 [2024-07-25 13:04:55.846439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.892 [2024-07-25 13:04:55.864295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.892 [2024-07-25 13:04:55.864324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.892 [2024-07-25 13:04:55.878901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.878982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.894444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.894474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.903740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.903772] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.919994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.920023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.937781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.937810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.952533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.952563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.970111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.970139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.984626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.984657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:55.994411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:55.994441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:56.007268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:56.007299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:56.023485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:56.023521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:56.040333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:56.040365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:56.056582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:56.056615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.893 [2024-07-25 13:04:56.074154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.893 [2024-07-25 13:04:56.074186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.089303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.089354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.105432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.105464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.122378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.122410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.137790] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.137818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.148261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.148293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.163925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.163954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.179079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.179109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.188981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.189012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.204954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.204984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.221425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.221455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.237848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.237909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.256676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.256717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.271834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.271887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.288056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.288086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.305826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.305866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.321255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.321308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.152 [2024-07-25 13:04:56.331335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.152 [2024-07-25 13:04:56.331364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.411 [2024-07-25 13:04:56.346541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.346572] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.363213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.363242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.382124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.382154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.396785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.396843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.406889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.406927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.421920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.421953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.437147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.437190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.454502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.454534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.471795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.471827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.487143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.487174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.497300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.497340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.514244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.514274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.530069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.530098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.546410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.546441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.563416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.563464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.579896] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.579936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.412 [2024-07-25 13:04:56.595745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.412 [2024-07-25 13:04:56.595774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.604661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.604701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.621482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.621511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.637544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.637573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.648000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.648034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.664203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.664264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.680260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.680291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.699042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.699073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.714076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.714107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.729762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.729792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.739887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.739926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.755710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.755740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.772532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.772562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.789576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.789607] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.806073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.806102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.822588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.822619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.839169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.839199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.671 [2024-07-25 13:04:56.857069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.671 [2024-07-25 13:04:56.857123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.873056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.873105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.890755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.890784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.905660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.905704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.921762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.921793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.941189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.941226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.955424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.955455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.971539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.971573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:56.988234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:56.988264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.004935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.004965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.020762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.020821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.037687] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.037728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.054336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.054365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.070731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.070762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.088151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.088181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.104476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.104508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.931 [2024-07-25 13:04:57.120542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.931 [2024-07-25 13:04:57.120575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.130645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.130707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.146397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.146429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.162866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.162903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.179553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.179601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.196143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.196188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.213954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.214003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.228710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.228743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.245088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.245135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.261860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.261906] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.278263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.278328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.294910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.294969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.311591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.311624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.327896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.327965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.346998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.347033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.361135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.361170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.190 [2024-07-25 13:04:57.377554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.190 [2024-07-25 13:04:57.377604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.449 [2024-07-25 13:04:57.393323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.449 [2024-07-25 13:04:57.393372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.449 [2024-07-25 13:04:57.408924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.408973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.426509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.426567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.441600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.441633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.450923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.450970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.466776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.466825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.482611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.482660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.500398] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.500447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.511272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.511318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 00:09:05.450 Latency(us) 00:09:05.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.450 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:05.450 Nvme1n1 : 5.01 11618.56 90.77 0.00 0.00 11003.69 4557.73 18826.71 00:09:05.450 =================================================================================================================== 00:09:05.450 Total : 11618.56 90.77 0.00 0.00 11003.69 4557.73 18826.71 00:09:05.450 [2024-07-25 13:04:57.523276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.523322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.535306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.535370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.547302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.547370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.559299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.559367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.571302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.571365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.583356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.583401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.595343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.595394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.607340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.607375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.619359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.619409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.450 [2024-07-25 13:04:57.631361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.450 [2024-07-25 13:04:57.631410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.643368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.643417] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.655327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.655370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.667350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.667408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.679410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.679474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.691408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.691456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.703383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.703429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.715396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.715465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.727399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.727445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.739406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.739447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 [2024-07-25 13:04:57.751431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.709 [2024-07-25 13:04:57.751455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.709 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66823) - No such process 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 66823 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.709 delay0 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.709 13:04:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:05.968 [2024-07-25 13:04:57.942090] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:12.533 Initializing NVMe Controllers 00:09:12.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:12.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:12.533 Initialization complete. Launching workers. 00:09:12.533 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 808 00:09:12.533 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1095, failed to submit 33 00:09:12.534 success 976, unsuccess 119, failed 0 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.534 rmmod nvme_tcp 00:09:12.534 rmmod nvme_fabrics 00:09:12.534 rmmod nvme_keyring 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66674 ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66674 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 66674 ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 66674 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66674 00:09:12.534 killing process with pid 66674 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:12.534 13:05:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66674' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 66674 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 66674 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:12.534 00:09:12.534 real 0m24.847s 00:09:12.534 user 0m40.774s 00:09:12.534 sys 0m6.833s 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.534 ************************************ 00:09:12.534 END TEST nvmf_zcopy 00:09:12.534 ************************************ 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.534 ************************************ 00:09:12.534 START TEST nvmf_nmic 00:09:12.534 ************************************ 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:12.534 * Looking for test storage... 
00:09:12.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.534 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.535 13:05:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.535 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:12.794 Cannot find device "nvmf_tgt_br" 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.794 Cannot find device "nvmf_tgt_br2" 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:09:12.794 Cannot find device "nvmf_tgt_br" 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:12.794 Cannot find device "nvmf_tgt_br2" 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:12.794 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.054 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.054 13:05:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:13.054 13:05:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:13.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:09:13.054 00:09:13.054 --- 10.0.0.2 ping statistics --- 00:09:13.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.054 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:13.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:13.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:09:13.054 00:09:13.054 --- 10.0.0.3 ping statistics --- 00:09:13.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.054 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:13.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:13.054 00:09:13.054 --- 10.0.0.1 ping statistics --- 00:09:13.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.054 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67150 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67150 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 67150 ']' 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.054 13:05:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:13.054 [2024-07-25 13:05:05.161505] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:13.054 [2024-07-25 13:05:05.161600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.313 [2024-07-25 13:05:05.303519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.313 [2024-07-25 13:05:05.415288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.313 [2024-07-25 13:05:05.415355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.313 [2024-07-25 13:05:05.415366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.313 [2024-07-25 13:05:05.415374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.313 [2024-07-25 13:05:05.415381] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.313 [2024-07-25 13:05:05.415552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.313 [2024-07-25 13:05:05.416042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.313 [2024-07-25 13:05:05.416690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.313 [2024-07-25 13:05:05.416751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.313 [2024-07-25 13:05:05.473021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 [2024-07-25 13:05:06.228359] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 Malloc0 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:14.248 13:05:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 [2024-07-25 13:05:06.300475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 test case1: single bdev can't be used in multiple subsystems 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.248 [2024-07-25 13:05:06.324319] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:14.248 [2024-07-25 13:05:06.324359] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:14.248 [2024-07-25 13:05:06.324371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.248 request: 00:09:14.248 { 00:09:14.248 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:14.248 "namespace": { 00:09:14.248 "bdev_name": "Malloc0", 00:09:14.248 "no_auto_visible": false 00:09:14.248 }, 00:09:14.248 "method": "nvmf_subsystem_add_ns", 00:09:14.248 "req_id": 1 00:09:14.248 } 00:09:14.248 Got JSON-RPC error response 00:09:14.248 response: 00:09:14.248 { 00:09:14.248 "code": -32602, 00:09:14.248 "message": "Invalid parameters" 00:09:14.248 } 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:14.248 Adding namespace failed - expected result. 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:14.248 test case2: host connect to nvmf target in multiple paths 00:09:14.248 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:14.249 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:14.249 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.249 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.249 [2024-07-25 13:05:06.336457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:14.249 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.249 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid=0dc0e430-4c84-41eb-b0bf-db443f090fb1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.507 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid=0dc0e430-4c84-41eb-b0bf-db443f090fb1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:14.507 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.507 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:14.507 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.508 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:14.508 13:05:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.039 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.039 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.039 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.039 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.039 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.039 13:05:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:17.039 13:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:17.039 [global] 00:09:17.039 thread=1 00:09:17.039 invalidate=1 00:09:17.039 rw=write 00:09:17.039 time_based=1 00:09:17.039 runtime=1 00:09:17.039 ioengine=libaio 00:09:17.039 direct=1 00:09:17.039 bs=4096 00:09:17.039 iodepth=1 00:09:17.039 norandommap=0 00:09:17.039 numjobs=1 00:09:17.039 00:09:17.039 verify_dump=1 00:09:17.039 verify_backlog=512 00:09:17.039 verify_state_save=0 00:09:17.039 do_verify=1 00:09:17.039 verify=crc32c-intel 00:09:17.039 [job0] 00:09:17.039 filename=/dev/nvme0n1 00:09:17.039 Could not set queue depth (nvme0n1) 00:09:17.039 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:17.039 fio-3.35 00:09:17.039 Starting 1 thread 00:09:17.973 00:09:17.973 job0: (groupid=0, jobs=1): err= 0: pid=67247: Thu Jul 25 13:05:09 2024 00:09:17.973 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:17.973 slat (nsec): min=12043, max=51852, avg=15189.19, stdev=4060.05 00:09:17.973 clat (usec): min=132, max=767, avg=174.63, stdev=22.53 00:09:17.973 lat (usec): min=146, max=780, avg=189.81, stdev=22.87 00:09:17.973 clat percentiles (usec): 00:09:17.973 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:09:17.973 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:17.973 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:09:17.973 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 297], 99.95th=[ 465], 00:09:17.973 | 99.99th=[ 766] 00:09:17.973 write: IOPS=3197, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:09:17.973 slat (usec): min=16, max=113, avg=21.77, stdev= 5.66 00:09:17.973 clat (usec): min=80, max=416, avg=105.18, stdev=15.56 00:09:17.973 lat (usec): min=98, max=434, avg=126.95, stdev=17.01 00:09:17.973 clat percentiles (usec): 00:09:17.973 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 95], 00:09:17.973 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 105], 00:09:17.973 | 70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 124], 95.00th=[ 133], 00:09:17.973 | 99.00th=[ 147], 99.50th=[ 157], 99.90th=[ 237], 99.95th=[ 318], 00:09:17.973 | 99.99th=[ 416] 00:09:17.973 bw ( KiB/s): min=12688, max=12688, per=99.19%, avg=12688.00, stdev= 0.00, samples=1 00:09:17.973 iops : min= 3172, max= 3172, avg=3172.00, stdev= 0.00, samples=1 00:09:17.973 lat (usec) : 100=21.06%, 250=78.75%, 500=0.18%, 1000=0.02% 00:09:17.973 cpu : usr=2.60%, sys=8.80%, ctx=6273, majf=0, minf=2 00:09:17.973 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.973 issued rwts: total=3072,3201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.973 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.973 00:09:17.973 Run status group 0 (all jobs): 00:09:17.973 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:17.973 WRITE: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:09:17.973 00:09:17.973 Disk stats (read/write): 00:09:17.973 nvme0n1: ios=2679/3072, merge=0/0, ticks=504/378, 
in_queue=882, util=91.38% 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.973 13:05:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.974 rmmod nvme_tcp 00:09:17.974 rmmod nvme_fabrics 00:09:17.974 rmmod nvme_keyring 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67150 ']' 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67150 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 67150 ']' 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 67150 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67150 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.974 killing process with pid 67150 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67150' 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 67150 00:09:17.974 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 67150 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.232 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:18.491 00:09:18.491 real 0m5.814s 00:09:18.491 user 0m18.461s 00:09:18.491 sys 0m2.330s 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.491 ************************************ 00:09:18.491 END TEST nvmf_nmic 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:18.491 ************************************ 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.491 ************************************ 00:09:18.491 START TEST nvmf_fio_target 00:09:18.491 ************************************ 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:18.491 * Looking for test storage... 
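For readers tracing the setup: both the nvmf_nmic run above and the nvmf_fio_target run that starts here call nvmftestinit, whose nvmf_veth_init step rebuilds the same veth/namespace/bridge topology each time. The following is a condensed, hand-annotated sketch of that sequence, with every interface name and address taken from the trace itself; treat it as an orientation summary under those assumptions, not the full nvmf/common.sh logic (teardown of stale devices, error handling, and the namespaced target launch are omitted).

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target (primary and secondary IP).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and address everything on 10.0.0.0/24.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# The peer ends left in the root namespace are enslaved to one bridge,
# so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3 inside the namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP traffic (port 4420) on the initiator interface and let the
# bridge forward between its own ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The single-packet pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, 10.0.0.1 that appear in the trace are the sanity check for this topology. The target application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why its listeners sit on 10.0.0.2 while the host-side nvme connect commands reach them through the bridge.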
00:09:18.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.491 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:18.492 
13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:18.492 Cannot find device "nvmf_tgt_br" 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.492 Cannot find device "nvmf_tgt_br2" 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:18.492 Cannot find device "nvmf_tgt_br" 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:18.492 Cannot find device "nvmf_tgt_br2" 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:18.492 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:18.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:18.752 
13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:18.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:09:18.752 00:09:18.752 --- 10.0.0.2 ping statistics --- 00:09:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.752 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:18.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:18.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:18.752 00:09:18.752 --- 10.0.0.3 ping statistics --- 00:09:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.752 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:18.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:18.752 00:09:18.752 --- 10.0.0.1 ping statistics --- 00:09:18.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.752 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67427 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67427 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67427 ']' 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.752 13:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.010 [2024-07-25 13:05:10.997653] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:19.011 [2024-07-25 13:05:10.997753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.011 [2024-07-25 13:05:11.141313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.269 [2024-07-25 13:05:11.262862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.269 [2024-07-25 13:05:11.263406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.269 [2024-07-25 13:05:11.263687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.269 [2024-07-25 13:05:11.264052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.269 [2024-07-25 13:05:11.264273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.269 [2024-07-25 13:05:11.264638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.269 [2024-07-25 13:05:11.264715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.269 [2024-07-25 13:05:11.264877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.269 [2024-07-25 13:05:11.265161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.269 [2024-07-25 13:05:11.322612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.836 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.836 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:19.836 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.836 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.836 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:19.836 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.836 13:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.095 [2024-07-25 13:05:12.239212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.095 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.353 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:20.611 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.870 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:20.870 13:05:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.131 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:21.131 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.389 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:21.389 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:21.648 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.907 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:21.907 13:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.166 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:22.166 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.424 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:22.424 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:22.682 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.941 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:22.941 13:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.199 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:23.199 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.199 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.457 [2024-07-25 13:05:15.585851] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.457 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:23.715 13:05:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:23.974 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid=0dc0e430-4c84-41eb-b0bf-db443f090fb1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.232 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:24.232 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.232 13:05:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.232 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:24.232 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:24.232 13:05:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.132 13:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.132 13:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.132 13:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.132 13:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:26.132 13:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.133 13:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:26.133 13:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:26.133 [global] 00:09:26.133 thread=1 00:09:26.133 invalidate=1 00:09:26.133 rw=write 00:09:26.133 time_based=1 00:09:26.133 runtime=1 00:09:26.133 ioengine=libaio 00:09:26.133 direct=1 00:09:26.133 bs=4096 00:09:26.133 iodepth=1 00:09:26.133 norandommap=0 00:09:26.133 numjobs=1 00:09:26.133 00:09:26.133 verify_dump=1 00:09:26.133 verify_backlog=512 00:09:26.133 verify_state_save=0 00:09:26.133 do_verify=1 00:09:26.133 verify=crc32c-intel 00:09:26.133 [job0] 00:09:26.133 filename=/dev/nvme0n1 00:09:26.133 [job1] 00:09:26.133 filename=/dev/nvme0n2 00:09:26.133 [job2] 00:09:26.133 filename=/dev/nvme0n3 00:09:26.133 [job3] 00:09:26.133 filename=/dev/nvme0n4 00:09:26.133 Could not set queue depth (nvme0n1) 00:09:26.133 Could not set queue depth (nvme0n2) 00:09:26.133 Could not set queue depth (nvme0n3) 00:09:26.133 Could not set queue depth (nvme0n4) 00:09:26.391 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.391 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.391 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.391 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.391 fio-3.35 00:09:26.391 Starting 4 threads 00:09:27.764 00:09:27.764 job0: (groupid=0, jobs=1): err= 0: pid=67611: Thu Jul 25 13:05:19 2024 00:09:27.764 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:27.764 slat (nsec): min=11077, max=65241, avg=15836.96, stdev=4790.20 00:09:27.764 clat (usec): min=228, max=1956, avg=344.47, stdev=64.55 00:09:27.764 lat (usec): min=244, max=1968, avg=360.31, stdev=64.81 00:09:27.764 clat percentiles (usec): 00:09:27.764 | 1.00th=[ 253], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:09:27.764 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:09:27.764 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 424], 00:09:27.764 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 799], 99.95th=[ 1958], 00:09:27.764 | 99.99th=[ 
1958] 00:09:27.764 write: IOPS=1710, BW=6841KiB/s (7005kB/s)(6848KiB/1001msec); 0 zone resets 00:09:27.764 slat (nsec): min=14471, max=74172, avg=24412.82, stdev=5366.68 00:09:27.764 clat (usec): min=145, max=508, avg=232.74, stdev=39.07 00:09:27.764 lat (usec): min=176, max=527, avg=257.15, stdev=39.21 00:09:27.764 clat percentiles (usec): 00:09:27.764 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 194], 00:09:27.764 | 30.00th=[ 212], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 245], 00:09:27.764 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:09:27.764 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 490], 99.95th=[ 510], 00:09:27.764 | 99.99th=[ 510] 00:09:27.764 bw ( KiB/s): min= 8192, max= 8192, per=24.52%, avg=8192.00, stdev= 0.00, samples=1 00:09:27.764 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:27.764 lat (usec) : 250=34.94%, 500=63.82%, 750=1.17%, 1000=0.03% 00:09:27.764 lat (msec) : 2=0.03% 00:09:27.764 cpu : usr=1.40%, sys=5.80%, ctx=3248, majf=0, minf=11 00:09:27.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.764 issued rwts: total=1536,1712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.764 job1: (groupid=0, jobs=1): err= 0: pid=67612: Thu Jul 25 13:05:19 2024 00:09:27.764 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:27.764 slat (nsec): min=11878, max=44230, avg=14965.82, stdev=2595.02 00:09:27.764 clat (usec): min=127, max=457, avg=156.72, stdev=14.48 00:09:27.764 lat (usec): min=140, max=471, avg=171.68, stdev=15.10 00:09:27.764 clat percentiles (usec): 00:09:27.764 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:09:27.764 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:09:27.764 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:09:27.764 | 99.00th=[ 194], 99.50th=[ 200], 99.90th=[ 219], 99.95th=[ 229], 00:09:27.764 | 99.99th=[ 457] 00:09:27.764 write: IOPS=3398, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec); 0 zone resets 00:09:27.764 slat (usec): min=14, max=116, avg=22.12, stdev= 4.97 00:09:27.764 clat (usec): min=84, max=224, avg=113.45, stdev=12.87 00:09:27.764 lat (usec): min=104, max=340, avg=135.56, stdev=14.11 00:09:27.764 clat percentiles (usec): 00:09:27.764 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 97], 20.00th=[ 103], 00:09:27.764 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:09:27.764 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 137], 00:09:27.764 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 169], 00:09:27.764 | 99.99th=[ 225] 00:09:27.764 bw ( KiB/s): min=13077, max=13077, per=39.14%, avg=13077.00, stdev= 0.00, samples=1 00:09:27.764 iops : min= 3269, max= 3269, avg=3269.00, stdev= 0.00, samples=1 00:09:27.764 lat (usec) : 100=7.72%, 250=92.26%, 500=0.02% 00:09:27.764 cpu : usr=3.40%, sys=8.60%, ctx=6474, majf=0, minf=5 00:09:27.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.764 issued rwts: total=3072,3402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.764 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:09:27.764 job2: (groupid=0, jobs=1): err= 0: pid=67613: Thu Jul 25 13:05:19 2024 00:09:27.764 read: IOPS=1098, BW=4396KiB/s (4501kB/s)(4400KiB/1001msec) 00:09:27.764 slat (nsec): min=16656, max=69808, avg=30624.64, stdev=11543.12 00:09:27.764 clat (usec): min=195, max=809, avg=404.42, stdev=97.49 00:09:27.764 lat (usec): min=218, max=845, avg=435.04, stdev=104.93 00:09:27.764 clat percentiles (usec): 00:09:27.764 | 1.00th=[ 273], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:09:27.764 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 441], 00:09:27.764 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 619], 00:09:27.764 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 701], 99.95th=[ 807], 00:09:27.764 | 99.99th=[ 807] 00:09:27.764 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:27.764 slat (nsec): min=22229, max=97607, avg=38924.30, stdev=10437.87 00:09:27.764 clat (usec): min=103, max=3274, avg=294.70, stdev=124.56 00:09:27.764 lat (usec): min=129, max=3337, avg=333.62, stdev=129.63 00:09:27.764 clat percentiles (usec): 00:09:27.764 | 1.00th=[ 118], 5.00th=[ 137], 10.00th=[ 182], 20.00th=[ 212], 00:09:27.764 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 293], 00:09:27.764 | 70.00th=[ 314], 80.00th=[ 375], 90.00th=[ 429], 95.00th=[ 449], 00:09:27.764 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 1795], 99.95th=[ 3261], 00:09:27.764 | 99.99th=[ 3261] 00:09:27.764 bw ( KiB/s): min= 6600, max= 6600, per=19.75%, avg=6600.00, stdev= 0.00, samples=1 00:09:27.764 iops : min= 1650, max= 1650, avg=1650.00, stdev= 0.00, samples=1 00:09:27.764 lat (usec) : 250=15.25%, 500=77.20%, 750=7.36%, 1000=0.11% 00:09:27.764 lat (msec) : 2=0.04%, 4=0.04% 00:09:27.764 cpu : usr=2.60%, sys=6.80%, ctx=2643, majf=0, minf=13 00:09:27.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.765 issued rwts: total=1100,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.765 job3: (groupid=0, jobs=1): err= 0: pid=67614: Thu Jul 25 13:05:19 2024 00:09:27.765 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:27.765 slat (nsec): min=11076, max=70813, avg=16610.16, stdev=4927.34 00:09:27.765 clat (usec): min=228, max=2050, avg=343.63, stdev=66.23 00:09:27.765 lat (usec): min=246, max=2068, avg=360.24, stdev=66.44 00:09:27.765 clat percentiles (usec): 00:09:27.765 | 1.00th=[ 260], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:09:27.765 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:09:27.765 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 396], 95.00th=[ 420], 00:09:27.765 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 816], 99.95th=[ 2057], 00:09:27.765 | 99.99th=[ 2057] 00:09:27.765 write: IOPS=1710, BW=6841KiB/s (7005kB/s)(6848KiB/1001msec); 0 zone resets 00:09:27.765 slat (usec): min=14, max=114, avg=23.78, stdev= 5.80 00:09:27.765 clat (usec): min=132, max=590, avg=233.53, stdev=38.51 00:09:27.765 lat (usec): min=177, max=614, avg=257.31, stdev=39.26 00:09:27.765 clat percentiles (usec): 00:09:27.765 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 196], 00:09:27.765 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 237], 60.00th=[ 247], 00:09:27.765 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:09:27.765 | 99.00th=[ 
310], 99.50th=[ 351], 99.90th=[ 461], 99.95th=[ 594], 00:09:27.765 | 99.99th=[ 594] 00:09:27.765 bw ( KiB/s): min= 8175, max= 8175, per=24.47%, avg=8175.00, stdev= 0.00, samples=1 00:09:27.765 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:27.765 lat (usec) : 250=33.65%, 500=65.09%, 750=1.20%, 1000=0.03% 00:09:27.765 lat (msec) : 4=0.03% 00:09:27.765 cpu : usr=1.70%, sys=5.50%, ctx=3250, majf=0, minf=6 00:09:27.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.765 issued rwts: total=1536,1712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.765 00:09:27.765 Run status group 0 (all jobs): 00:09:27.765 READ: bw=28.3MiB/s (29.6MB/s), 4396KiB/s-12.0MiB/s (4501kB/s-12.6MB/s), io=28.3MiB (29.7MB), run=1001-1001msec 00:09:27.765 WRITE: bw=32.6MiB/s (34.2MB/s), 6138KiB/s-13.3MiB/s (6285kB/s-13.9MB/s), io=32.7MiB (34.2MB), run=1001-1001msec 00:09:27.765 00:09:27.765 Disk stats (read/write): 00:09:27.765 nvme0n1: ios=1334/1536, merge=0/0, ticks=450/340, in_queue=790, util=87.58% 00:09:27.765 nvme0n2: ios=2595/2982, merge=0/0, ticks=449/366, in_queue=815, util=88.12% 00:09:27.765 nvme0n3: ios=1024/1193, merge=0/0, ticks=418/368, in_queue=786, util=88.97% 00:09:27.765 nvme0n4: ios=1284/1536, merge=0/0, ticks=420/333, in_queue=753, util=89.74% 00:09:27.765 13:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:27.765 [global] 00:09:27.765 thread=1 00:09:27.765 invalidate=1 00:09:27.765 rw=randwrite 00:09:27.765 time_based=1 00:09:27.765 runtime=1 00:09:27.765 ioengine=libaio 00:09:27.765 direct=1 00:09:27.765 bs=4096 00:09:27.765 iodepth=1 00:09:27.765 norandommap=0 00:09:27.765 numjobs=1 00:09:27.765 00:09:27.765 verify_dump=1 00:09:27.765 verify_backlog=512 00:09:27.765 verify_state_save=0 00:09:27.765 do_verify=1 00:09:27.765 verify=crc32c-intel 00:09:27.765 [job0] 00:09:27.765 filename=/dev/nvme0n1 00:09:27.765 [job1] 00:09:27.765 filename=/dev/nvme0n2 00:09:27.765 [job2] 00:09:27.765 filename=/dev/nvme0n3 00:09:27.765 [job3] 00:09:27.765 filename=/dev/nvme0n4 00:09:27.765 Could not set queue depth (nvme0n1) 00:09:27.765 Could not set queue depth (nvme0n2) 00:09:27.765 Could not set queue depth (nvme0n3) 00:09:27.765 Could not set queue depth (nvme0n4) 00:09:27.765 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.765 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.765 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.765 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.765 fio-3.35 00:09:27.765 Starting 4 threads 00:09:29.138 00:09:29.138 job0: (groupid=0, jobs=1): err= 0: pid=67674: Thu Jul 25 13:05:20 2024 00:09:29.138 read: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:09:29.138 slat (usec): min=8, max=154, avg=13.90, stdev= 3.81 00:09:29.138 clat (usec): min=55, max=919, avg=170.90, stdev=34.80 00:09:29.138 lat (usec): min=139, max=942, avg=184.79, stdev=34.63 00:09:29.138 clat percentiles (usec): 00:09:29.138 
| 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:09:29.138 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:29.138 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 212], 95.00th=[ 241], 00:09:29.138 | 99.00th=[ 297], 99.50th=[ 330], 99.90th=[ 400], 99.95th=[ 449], 00:09:29.138 | 99.99th=[ 922] 00:09:29.138 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:29.138 slat (usec): min=15, max=108, avg=21.77, stdev= 5.56 00:09:29.138 clat (usec): min=68, max=705, avg=126.03, stdev=24.52 00:09:29.138 lat (usec): min=113, max=723, avg=147.80, stdev=25.54 00:09:29.138 clat percentiles (usec): 00:09:29.138 | 1.00th=[ 101], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 114], 00:09:29.138 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 124], 00:09:29.138 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 145], 95.00th=[ 178], 00:09:29.138 | 99.00th=[ 221], 99.50th=[ 235], 99.90th=[ 249], 99.95th=[ 465], 00:09:29.138 | 99.99th=[ 709] 00:09:29.138 bw ( KiB/s): min=12544, max=12544, per=30.64%, avg=12544.00, stdev= 0.00, samples=1 00:09:29.138 iops : min= 3136, max= 3136, avg=3136.00, stdev= 0.00, samples=1 00:09:29.138 lat (usec) : 100=0.44%, 250=97.74%, 500=1.79%, 750=0.02%, 1000=0.02% 00:09:29.138 cpu : usr=2.50%, sys=8.50%, ctx=5974, majf=0, minf=9 00:09:29.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 issued rwts: total=2898,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.139 job1: (groupid=0, jobs=1): err= 0: pid=67675: Thu Jul 25 13:05:20 2024 00:09:29.139 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:29.139 slat (nsec): min=10449, max=49426, avg=14316.73, stdev=3076.68 00:09:29.139 clat (usec): min=164, max=547, avg=251.24, stdev=22.08 00:09:29.139 lat (usec): min=179, max=562, avg=265.55, stdev=22.15 00:09:29.139 clat percentiles (usec): 00:09:29.139 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:09:29.139 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:09:29.139 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:09:29.139 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 404], 99.95th=[ 457], 00:09:29.139 | 99.99th=[ 545] 00:09:29.139 write: IOPS=2048, BW=8196KiB/s (8393kB/s)(8204KiB/1001msec); 0 zone resets 00:09:29.139 slat (nsec): min=11979, max=90793, avg=18374.51, stdev=4742.34 00:09:29.139 clat (usec): min=120, max=1562, avg=200.77, stdev=37.60 00:09:29.139 lat (usec): min=147, max=1578, avg=219.14, stdev=38.24 00:09:29.139 clat percentiles (usec): 00:09:29.139 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:09:29.139 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:09:29.139 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 229], 00:09:29.139 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 553], 99.95th=[ 668], 00:09:29.139 | 99.99th=[ 1565] 00:09:29.139 bw ( KiB/s): min= 8192, max= 8192, per=20.01%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.139 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.139 lat (usec) : 250=75.92%, 500=23.98%, 750=0.07% 00:09:29.139 lat (msec) : 2=0.02% 00:09:29.139 cpu : usr=2.00%, sys=5.20%, ctx=4100, majf=0, minf=12 00:09:29.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 issued rwts: total=2048,2051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.139 job2: (groupid=0, jobs=1): err= 0: pid=67676: Thu Jul 25 13:05:20 2024 00:09:29.139 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:29.139 slat (nsec): min=9052, max=50576, avg=13704.31, stdev=4865.19 00:09:29.139 clat (usec): min=212, max=613, avg=252.10, stdev=21.81 00:09:29.139 lat (usec): min=222, max=647, avg=265.81, stdev=22.60 00:09:29.139 clat percentiles (usec): 00:09:29.139 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:09:29.139 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:09:29.139 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:09:29.139 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 408], 99.95th=[ 461], 00:09:29.139 | 99.99th=[ 611] 00:09:29.139 write: IOPS=2047, BW=8192KiB/s (8388kB/s)(8200KiB/1001msec); 0 zone resets 00:09:29.139 slat (usec): min=15, max=609, avg=23.85, stdev=14.11 00:09:29.139 clat (usec): min=3, max=1640, avg=194.96, stdev=37.17 00:09:29.139 lat (usec): min=151, max=1660, avg=218.82, stdev=38.53 00:09:29.139 clat percentiles (usec): 00:09:29.139 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:09:29.139 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 196], 00:09:29.139 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 223], 00:09:29.139 | 99.00th=[ 245], 99.50th=[ 260], 99.90th=[ 392], 99.95th=[ 482], 00:09:29.139 | 99.99th=[ 1647] 00:09:29.139 bw ( KiB/s): min= 8192, max= 8192, per=20.01%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.139 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.139 lat (usec) : 4=0.02%, 250=75.43%, 500=24.50%, 750=0.02% 00:09:29.139 lat (msec) : 2=0.02% 00:09:29.139 cpu : usr=1.30%, sys=6.80%, ctx=4101, majf=0, minf=7 00:09:29.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 issued rwts: total=2048,2050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.139 job3: (groupid=0, jobs=1): err= 0: pid=67677: Thu Jul 25 13:05:20 2024 00:09:29.139 read: IOPS=2669, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:09:29.139 slat (nsec): min=11079, max=63087, avg=13809.71, stdev=2904.95 00:09:29.139 clat (usec): min=137, max=1650, avg=172.54, stdev=39.06 00:09:29.139 lat (usec): min=150, max=1663, avg=186.35, stdev=39.29 00:09:29.139 clat percentiles (usec): 00:09:29.139 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:09:29.139 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:09:29.139 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 212], 00:09:29.139 | 99.00th=[ 265], 99.50th=[ 322], 99.90th=[ 506], 99.95th=[ 529], 00:09:29.139 | 99.99th=[ 1647] 00:09:29.139 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:29.139 slat (nsec): min=10700, max=86316, avg=20938.28, stdev=4894.17 00:09:29.139 clat (usec): min=98, max=843, avg=139.40, stdev=35.65 00:09:29.139 lat (usec): min=117, max=862, 
avg=160.34, stdev=35.26 00:09:29.139 clat percentiles (usec): 00:09:29.139 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 120], 00:09:29.139 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:09:29.139 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 192], 95.00th=[ 217], 00:09:29.139 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 469], 99.95th=[ 545], 00:09:29.139 | 99.99th=[ 840] 00:09:29.139 bw ( KiB/s): min=12288, max=12288, per=30.02%, avg=12288.00, stdev= 0.00, samples=1 00:09:29.139 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:29.139 lat (usec) : 100=0.02%, 250=98.54%, 500=1.34%, 750=0.07%, 1000=0.02% 00:09:29.139 lat (msec) : 2=0.02% 00:09:29.139 cpu : usr=2.00%, sys=8.30%, ctx=5745, majf=0, minf=17 00:09:29.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.139 issued rwts: total=2672,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.139 00:09:29.139 Run status group 0 (all jobs): 00:09:29.139 READ: bw=37.7MiB/s (39.6MB/s), 8184KiB/s-11.3MiB/s (8380kB/s-11.9MB/s), io=37.8MiB (39.6MB), run=1001-1001msec 00:09:29.139 WRITE: bw=40.0MiB/s (41.9MB/s), 8192KiB/s-12.0MiB/s (8388kB/s-12.6MB/s), io=40.0MiB (42.0MB), run=1001-1001msec 00:09:29.139 00:09:29.139 Disk stats (read/write): 00:09:29.139 nvme0n1: ios=2610/2847, merge=0/0, ticks=452/377, in_queue=829, util=88.68% 00:09:29.139 nvme0n2: ios=1606/2048, merge=0/0, ticks=404/365, in_queue=769, util=89.08% 00:09:29.139 nvme0n3: ios=1556/2048, merge=0/0, ticks=379/409, in_queue=788, util=89.09% 00:09:29.139 nvme0n4: ios=2560/2587, merge=0/0, ticks=447/354, in_queue=801, util=89.75% 00:09:29.139 13:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:29.139 [global] 00:09:29.139 thread=1 00:09:29.139 invalidate=1 00:09:29.139 rw=write 00:09:29.139 time_based=1 00:09:29.139 runtime=1 00:09:29.139 ioengine=libaio 00:09:29.139 direct=1 00:09:29.139 bs=4096 00:09:29.139 iodepth=128 00:09:29.139 norandommap=0 00:09:29.139 numjobs=1 00:09:29.139 00:09:29.139 verify_dump=1 00:09:29.139 verify_backlog=512 00:09:29.139 verify_state_save=0 00:09:29.139 do_verify=1 00:09:29.139 verify=crc32c-intel 00:09:29.139 [job0] 00:09:29.139 filename=/dev/nvme0n1 00:09:29.139 [job1] 00:09:29.139 filename=/dev/nvme0n2 00:09:29.139 [job2] 00:09:29.139 filename=/dev/nvme0n3 00:09:29.139 [job3] 00:09:29.139 filename=/dev/nvme0n4 00:09:29.139 Could not set queue depth (nvme0n1) 00:09:29.139 Could not set queue depth (nvme0n2) 00:09:29.139 Could not set queue depth (nvme0n3) 00:09:29.139 Could not set queue depth (nvme0n4) 00:09:29.139 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.139 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.139 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.139 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.139 fio-3.35 00:09:29.139 Starting 4 threads 00:09:30.514 00:09:30.514 job0: (groupid=0, jobs=1): err= 0: pid=67731: Thu Jul 25 13:05:22 2024 
00:09:30.514 read: IOPS=4169, BW=16.3MiB/s (17.1MB/s)(16.3MiB/1002msec) 00:09:30.514 slat (usec): min=4, max=6812, avg=115.11, stdev=516.27 00:09:30.514 clat (usec): min=1190, max=25602, avg=14850.09, stdev=3987.31 00:09:30.514 lat (usec): min=2972, max=25617, avg=14965.20, stdev=3988.33 00:09:30.514 clat percentiles (usec): 00:09:30.514 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[11207], 20.00th=[11338], 00:09:30.514 | 30.00th=[11469], 40.00th=[11600], 50.00th=[12125], 60.00th=[16909], 00:09:30.514 | 70.00th=[18744], 80.00th=[19268], 90.00th=[19530], 95.00th=[20317], 00:09:30.514 | 99.00th=[22938], 99.50th=[23462], 99.90th=[24249], 99.95th=[25560], 00:09:30.514 | 99.99th=[25560] 00:09:30.514 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:30.514 slat (usec): min=8, max=5140, avg=105.42, stdev=484.73 00:09:30.514 clat (usec): min=8274, max=24687, avg=14012.21, stdev=3763.54 00:09:30.514 lat (usec): min=9040, max=24702, avg=14117.63, stdev=3763.24 00:09:30.514 clat percentiles (usec): 00:09:30.514 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[10683], 20.00th=[10814], 00:09:30.514 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[14746], 00:09:30.514 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19006], 95.00th=[19530], 00:09:30.514 | 99.00th=[22938], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:09:30.514 | 99.99th=[24773] 00:09:30.514 bw ( KiB/s): min=21384, max=21384, per=35.35%, avg=21384.00, stdev= 0.00, samples=1 00:09:30.514 iops : min= 5346, max= 5346, avg=5346.00, stdev= 0.00, samples=1 00:09:30.514 lat (msec) : 2=0.01%, 4=0.16%, 10=2.06%, 20=92.81%, 50=4.96% 00:09:30.514 cpu : usr=4.50%, sys=11.59%, ctx=502, majf=0, minf=12 00:09:30.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:30.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.514 issued rwts: total=4178,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.514 job1: (groupid=0, jobs=1): err= 0: pid=67732: Thu Jul 25 13:05:22 2024 00:09:30.514 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:09:30.514 slat (usec): min=4, max=7217, avg=115.56, stdev=460.53 00:09:30.514 clat (usec): min=7709, max=29033, avg=15401.56, stdev=4645.57 00:09:30.514 lat (usec): min=8690, max=29055, avg=15517.11, stdev=4666.96 00:09:30.514 clat percentiles (usec): 00:09:30.514 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10814], 20.00th=[11076], 00:09:30.514 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12911], 60.00th=[18744], 00:09:30.514 | 70.00th=[19268], 80.00th=[19530], 90.00th=[20317], 95.00th=[22676], 00:09:30.514 | 99.00th=[26084], 99.50th=[27919], 99.90th=[28967], 99.95th=[28967], 00:09:30.514 | 99.99th=[28967] 00:09:30.514 write: IOPS=4547, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1002msec); 0 zone resets 00:09:30.514 slat (usec): min=10, max=7142, avg=108.65, stdev=555.53 00:09:30.514 clat (usec): min=1073, max=22960, avg=13960.46, stdev=3885.97 00:09:30.514 lat (usec): min=1110, max=22981, avg=14069.10, stdev=3910.23 00:09:30.515 clat percentiles (usec): 00:09:30.515 | 1.00th=[ 5735], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:09:30.515 | 30.00th=[10683], 40.00th=[11076], 50.00th=[13042], 60.00th=[16057], 00:09:30.515 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19268], 00:09:30.515 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22938], 
99.95th=[22938], 00:09:30.515 | 99.99th=[22938] 00:09:30.515 bw ( KiB/s): min=14728, max=21080, per=29.59%, avg=17904.00, stdev=4491.54, samples=2 00:09:30.515 iops : min= 3684, max= 5270, avg=4477.00, stdev=1121.47, samples=2 00:09:30.515 lat (msec) : 2=0.15%, 4=0.16%, 10=5.73%, 20=85.61%, 50=8.34% 00:09:30.515 cpu : usr=3.70%, sys=11.59%, ctx=554, majf=0, minf=5 00:09:30.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:30.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.515 issued rwts: total=4096,4557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.515 job2: (groupid=0, jobs=1): err= 0: pid=67733: Thu Jul 25 13:05:22 2024 00:09:30.515 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:09:30.515 slat (usec): min=6, max=8166, avg=178.25, stdev=797.18 00:09:30.515 clat (usec): min=14320, max=43125, avg=22712.62, stdev=5069.64 00:09:30.515 lat (usec): min=14346, max=43150, avg=22890.87, stdev=5104.02 00:09:30.515 clat percentiles (usec): 00:09:30.515 | 1.00th=[15008], 5.00th=[16450], 10.00th=[17695], 20.00th=[18482], 00:09:30.515 | 30.00th=[19006], 40.00th=[19530], 50.00th=[21890], 60.00th=[22676], 00:09:30.515 | 70.00th=[25560], 80.00th=[27395], 90.00th=[30540], 95.00th=[31065], 00:09:30.515 | 99.00th=[34341], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:09:30.515 | 99.99th=[43254] 00:09:30.515 write: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(11.9MiB/1010msec); 0 zone resets 00:09:30.515 slat (usec): min=14, max=10362, avg=170.79, stdev=759.07 00:09:30.515 clat (usec): min=7878, max=59007, avg=22763.13, stdev=11759.11 00:09:30.515 lat (usec): min=10048, max=59035, avg=22933.92, stdev=11841.71 00:09:30.515 clat percentiles (usec): 00:09:30.515 | 1.00th=[11207], 5.00th=[12125], 10.00th=[12387], 20.00th=[14222], 00:09:30.515 | 30.00th=[14746], 40.00th=[15139], 50.00th=[17957], 60.00th=[18744], 00:09:30.515 | 70.00th=[25822], 80.00th=[32900], 90.00th=[42730], 95.00th=[47449], 00:09:30.515 | 99.00th=[52167], 99.50th=[53216], 99.90th=[58983], 99.95th=[58983], 00:09:30.515 | 99.99th=[58983] 00:09:30.515 bw ( KiB/s): min=11000, max=12288, per=19.25%, avg=11644.00, stdev=910.75, samples=2 00:09:30.515 iops : min= 2750, max= 3072, avg=2911.00, stdev=227.69, samples=2 00:09:30.515 lat (msec) : 10=0.04%, 20=54.28%, 50=44.47%, 100=1.21% 00:09:30.515 cpu : usr=2.68%, sys=9.71%, ctx=269, majf=0, minf=9 00:09:30.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:30.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.515 issued rwts: total=2560,3039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.515 job3: (groupid=0, jobs=1): err= 0: pid=67734: Thu Jul 25 13:05:22 2024 00:09:30.515 read: IOPS=2742, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1004msec) 00:09:30.515 slat (usec): min=4, max=14531, avg=200.63, stdev=1173.38 00:09:30.515 clat (usec): min=1290, max=59313, avg=25191.35, stdev=11748.21 00:09:30.515 lat (usec): min=4194, max=59330, avg=25391.98, stdev=11780.94 00:09:30.515 clat percentiles (usec): 00:09:30.515 | 1.00th=[ 4817], 5.00th=[13698], 10.00th=[15664], 20.00th=[16188], 00:09:30.515 | 30.00th=[16450], 40.00th=[17171], 50.00th=[22676], 60.00th=[25560], 
00:09:30.515 | 70.00th=[26870], 80.00th=[30802], 90.00th=[43779], 95.00th=[52167], 00:09:30.515 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:09:30.515 | 99.99th=[59507] 00:09:30.515 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:30.515 slat (usec): min=12, max=11506, avg=138.21, stdev=708.26 00:09:30.515 clat (usec): min=10238, max=42879, avg=18533.27, stdev=6920.53 00:09:30.515 lat (usec): min=12371, max=42922, avg=18671.48, stdev=6923.32 00:09:30.515 clat percentiles (usec): 00:09:30.515 | 1.00th=[10814], 5.00th=[12780], 10.00th=[12911], 20.00th=[13042], 00:09:30.515 | 30.00th=[13304], 40.00th=[13829], 50.00th=[17171], 60.00th=[18220], 00:09:30.515 | 70.00th=[20317], 80.00th=[21365], 90.00th=[29230], 95.00th=[35914], 00:09:30.515 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:30.515 | 99.99th=[42730] 00:09:30.515 bw ( KiB/s): min= 8456, max=16120, per=20.31%, avg=12288.00, stdev=5419.27, samples=2 00:09:30.515 iops : min= 2114, max= 4030, avg=3072.00, stdev=1354.82, samples=2 00:09:30.515 lat (msec) : 2=0.02%, 10=1.10%, 20=54.56%, 50=41.13%, 100=3.19% 00:09:30.515 cpu : usr=2.19%, sys=9.87%, ctx=188, majf=0, minf=11 00:09:30.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:30.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.515 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.515 00:09:30.515 Run status group 0 (all jobs): 00:09:30.515 READ: bw=52.5MiB/s (55.1MB/s), 9.90MiB/s-16.3MiB/s (10.4MB/s-17.1MB/s), io=53.1MiB (55.7MB), run=1002-1010msec 00:09:30.515 WRITE: bw=59.1MiB/s (61.9MB/s), 11.8MiB/s-18.0MiB/s (12.3MB/s-18.8MB/s), io=59.7MiB (62.6MB), run=1002-1010msec 00:09:30.515 00:09:30.515 Disk stats (read/write): 00:09:30.515 nvme0n1: ios=3836/4096, merge=0/0, ticks=12598/11509, in_queue=24107, util=89.11% 00:09:30.515 nvme0n2: ios=3760/4096, merge=0/0, ticks=18480/17138, in_queue=35618, util=88.83% 00:09:30.515 nvme0n3: ios=2541/2560, merge=0/0, ticks=28571/23094, in_queue=51665, util=89.58% 00:09:30.515 nvme0n4: ios=2144/2560, merge=0/0, ticks=14905/10818, in_queue=25723, util=89.94% 00:09:30.515 13:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:30.515 [global] 00:09:30.515 thread=1 00:09:30.515 invalidate=1 00:09:30.515 rw=randwrite 00:09:30.515 time_based=1 00:09:30.515 runtime=1 00:09:30.515 ioengine=libaio 00:09:30.515 direct=1 00:09:30.515 bs=4096 00:09:30.515 iodepth=128 00:09:30.515 norandommap=0 00:09:30.515 numjobs=1 00:09:30.515 00:09:30.515 verify_dump=1 00:09:30.515 verify_backlog=512 00:09:30.515 verify_state_save=0 00:09:30.515 do_verify=1 00:09:30.515 verify=crc32c-intel 00:09:30.515 [job0] 00:09:30.515 filename=/dev/nvme0n1 00:09:30.515 [job1] 00:09:30.515 filename=/dev/nvme0n2 00:09:30.515 [job2] 00:09:30.515 filename=/dev/nvme0n3 00:09:30.515 [job3] 00:09:30.515 filename=/dev/nvme0n4 00:09:30.515 Could not set queue depth (nvme0n1) 00:09:30.515 Could not set queue depth (nvme0n2) 00:09:30.515 Could not set queue depth (nvme0n3) 00:09:30.515 Could not set queue depth (nvme0n4) 00:09:30.515 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:09:30.515 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.515 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.515 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.515 fio-3.35 00:09:30.515 Starting 4 threads 00:09:31.888 00:09:31.888 job0: (groupid=0, jobs=1): err= 0: pid=67787: Thu Jul 25 13:05:23 2024 00:09:31.888 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:31.888 slat (usec): min=9, max=10450, avg=146.27, stdev=963.51 00:09:31.888 clat (usec): min=12383, max=32363, avg=20236.83, stdev=2194.29 00:09:31.888 lat (usec): min=12400, max=39307, avg=20383.10, stdev=2245.85 00:09:31.888 clat percentiles (usec): 00:09:31.888 | 1.00th=[12649], 5.00th=[17957], 10.00th=[19530], 20.00th=[19792], 00:09:31.888 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:09:31.888 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21627], 95.00th=[22152], 00:09:31.888 | 99.00th=[30802], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:09:31.888 | 99.99th=[32375] 00:09:31.888 write: IOPS=3525, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1003msec); 0 zone resets 00:09:31.888 slat (usec): min=9, max=16500, avg=147.45, stdev=939.34 00:09:31.888 clat (usec): min=2103, max=28391, avg=18342.28, stdev=2499.11 00:09:31.888 lat (usec): min=2132, max=28422, avg=18489.73, stdev=2361.53 00:09:31.888 clat percentiles (usec): 00:09:31.888 | 1.00th=[10290], 5.00th=[15664], 10.00th=[16712], 20.00th=[17695], 00:09:31.888 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:09:31.888 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:09:31.888 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:09:31.888 | 99.99th=[28443] 00:09:31.888 bw ( KiB/s): min=13330, max=13940, per=25.72%, avg=13635.00, stdev=431.34, samples=2 00:09:31.888 iops : min= 3332, max= 3485, avg=3408.50, stdev=108.19, samples=2 00:09:31.888 lat (msec) : 4=0.27%, 10=0.08%, 20=70.73%, 50=28.92% 00:09:31.888 cpu : usr=2.99%, sys=10.88%, ctx=180, majf=0, minf=15 00:09:31.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:31.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.888 issued rwts: total=3072,3536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.888 job1: (groupid=0, jobs=1): err= 0: pid=67788: Thu Jul 25 13:05:23 2024 00:09:31.888 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:09:31.888 slat (usec): min=5, max=6578, avg=163.48, stdev=596.95 00:09:31.888 clat (usec): min=10742, max=29021, avg=20534.11, stdev=2567.93 00:09:31.888 lat (usec): min=10751, max=29049, avg=20697.59, stdev=2579.07 00:09:31.888 clat percentiles (usec): 00:09:31.888 | 1.00th=[12780], 5.00th=[15795], 10.00th=[17695], 20.00th=[19268], 00:09:31.888 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20579], 00:09:31.888 | 70.00th=[20841], 80.00th=[22152], 90.00th=[23725], 95.00th=[25035], 00:09:31.888 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28443], 99.95th=[28967], 00:09:31.888 | 99.99th=[28967] 00:09:31.888 write: IOPS=3115, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1004msec); 0 zone resets 00:09:31.888 slat (usec): min=4, max=11670, avg=151.86, 
stdev=611.20 00:09:31.888 clat (usec): min=494, max=29285, avg=20128.97, stdev=3411.11 00:09:31.888 lat (usec): min=6921, max=29662, avg=20280.82, stdev=3407.03 00:09:31.888 clat percentiles (usec): 00:09:31.888 | 1.00th=[ 8291], 5.00th=[14877], 10.00th=[15795], 20.00th=[18220], 00:09:31.888 | 30.00th=[18744], 40.00th=[19268], 50.00th=[20055], 60.00th=[21365], 00:09:31.888 | 70.00th=[21890], 80.00th=[22152], 90.00th=[23200], 95.00th=[26084], 00:09:31.888 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:09:31.888 | 99.99th=[29230] 00:09:31.888 bw ( KiB/s): min=12263, max=12288, per=23.16%, avg=12275.50, stdev=17.68, samples=2 00:09:31.888 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:09:31.888 lat (usec) : 500=0.02% 00:09:31.888 lat (msec) : 10=0.90%, 20=39.10%, 50=59.98% 00:09:31.888 cpu : usr=3.19%, sys=9.07%, ctx=921, majf=0, minf=12 00:09:31.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:31.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.889 issued rwts: total=3072,3128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.889 job2: (groupid=0, jobs=1): err= 0: pid=67789: Thu Jul 25 13:05:23 2024 00:09:31.889 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:09:31.889 slat (usec): min=4, max=6770, avg=166.10, stdev=579.27 00:09:31.889 clat (usec): min=10023, max=31064, avg=20824.11, stdev=2916.99 00:09:31.889 lat (usec): min=11359, max=31090, avg=20990.21, stdev=2929.71 00:09:31.889 clat percentiles (usec): 00:09:31.889 | 1.00th=[14353], 5.00th=[15926], 10.00th=[17695], 20.00th=[19530], 00:09:31.889 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:09:31.889 | 70.00th=[21103], 80.00th=[22414], 90.00th=[24249], 95.00th=[27132], 00:09:31.889 | 99.00th=[29230], 99.50th=[29754], 99.90th=[31065], 99.95th=[31065], 00:09:31.889 | 99.99th=[31065] 00:09:31.889 write: IOPS=3148, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1008msec); 0 zone resets 00:09:31.889 slat (usec): min=4, max=9834, avg=146.52, stdev=572.11 00:09:31.889 clat (usec): min=7333, max=30021, avg=20027.87, stdev=3968.03 00:09:31.889 lat (usec): min=7340, max=30766, avg=20174.39, stdev=4001.98 00:09:31.889 clat percentiles (usec): 00:09:31.889 | 1.00th=[ 8848], 5.00th=[11469], 10.00th=[14222], 20.00th=[18744], 00:09:31.889 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20579], 60.00th=[21627], 00:09:31.889 | 70.00th=[22152], 80.00th=[22414], 90.00th=[23725], 95.00th=[25035], 00:09:31.889 | 99.00th=[28967], 99.50th=[29492], 99.90th=[29754], 99.95th=[30016], 00:09:31.889 | 99.99th=[30016] 00:09:31.889 bw ( KiB/s): min=12263, max=12312, per=23.18%, avg=12287.50, stdev=34.65, samples=2 00:09:31.889 iops : min= 3065, max= 3078, avg=3071.50, stdev= 9.19, samples=2 00:09:31.889 lat (msec) : 10=1.39%, 20=36.54%, 50=62.07% 00:09:31.889 cpu : usr=2.58%, sys=9.14%, ctx=1016, majf=0, minf=7 00:09:31.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:31.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.889 issued rwts: total=3072,3174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.889 job3: (groupid=0, jobs=1): err= 0: pid=67790: Thu Jul 25 
13:05:23 2024 00:09:31.889 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:31.889 slat (usec): min=8, max=10764, avg=146.25, stdev=959.47 00:09:31.889 clat (usec): min=12362, max=32360, avg=20240.77, stdev=2189.40 00:09:31.889 lat (usec): min=12383, max=39307, avg=20387.02, stdev=2248.00 00:09:31.889 clat percentiles (usec): 00:09:31.889 | 1.00th=[12649], 5.00th=[18220], 10.00th=[19268], 20.00th=[19792], 00:09:31.889 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:09:31.889 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21627], 95.00th=[22152], 00:09:31.889 | 99.00th=[30802], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:09:31.889 | 99.99th=[32375] 00:09:31.889 write: IOPS=3508, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1003msec); 0 zone resets 00:09:31.889 slat (usec): min=14, max=17088, avg=148.19, stdev=944.63 00:09:31.889 clat (usec): min=1990, max=28874, avg=18430.43, stdev=2350.61 00:09:31.889 lat (usec): min=9240, max=28902, avg=18578.62, stdev=2196.85 00:09:31.889 clat percentiles (usec): 00:09:31.889 | 1.00th=[10159], 5.00th=[15926], 10.00th=[16581], 20.00th=[17433], 00:09:31.889 | 30.00th=[18220], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:09:31.889 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:09:31.889 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:09:31.889 | 99.99th=[28967] 00:09:31.889 bw ( KiB/s): min=13304, max=13804, per=25.57%, avg=13554.00, stdev=353.55, samples=2 00:09:31.889 iops : min= 3326, max= 3451, avg=3388.50, stdev=88.39, samples=2 00:09:31.889 lat (msec) : 2=0.02%, 10=0.39%, 20=68.91%, 50=30.68% 00:09:31.889 cpu : usr=2.79%, sys=11.08%, ctx=141, majf=0, minf=11 00:09:31.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:31.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.889 issued rwts: total=3072,3519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.889 00:09:31.889 Run status group 0 (all jobs): 00:09:31.889 READ: bw=47.6MiB/s (49.9MB/s), 11.9MiB/s-12.0MiB/s (12.5MB/s-12.5MB/s), io=48.0MiB (50.3MB), run=1003-1008msec 00:09:31.889 WRITE: bw=51.8MiB/s (54.3MB/s), 12.2MiB/s-13.8MiB/s (12.8MB/s-14.4MB/s), io=52.2MiB (54.7MB), run=1003-1008msec 00:09:31.889 00:09:31.889 Disk stats (read/write): 00:09:31.889 nvme0n1: ios=2610/3008, merge=0/0, ticks=49740/52500, in_queue=102240, util=88.18% 00:09:31.889 nvme0n2: ios=2609/2741, merge=0/0, ticks=25731/25372, in_queue=51103, util=87.77% 00:09:31.889 nvme0n3: ios=2560/2786, merge=0/0, ticks=26641/24717, in_queue=51358, util=88.47% 00:09:31.889 nvme0n4: ios=2560/3008, merge=0/0, ticks=49777/52659, in_queue=102436, util=89.94% 00:09:31.889 13:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:31.889 13:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67803 00:09:31.889 13:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:31.889 13:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:31.889 [global] 00:09:31.889 thread=1 00:09:31.889 invalidate=1 00:09:31.889 rw=read 00:09:31.889 time_based=1 00:09:31.889 runtime=10 00:09:31.889 ioengine=libaio 00:09:31.889 direct=1 00:09:31.889 bs=4096 
00:09:31.889 iodepth=1 00:09:31.889 norandommap=1 00:09:31.889 numjobs=1 00:09:31.889 00:09:31.889 [job0] 00:09:31.889 filename=/dev/nvme0n1 00:09:31.889 [job1] 00:09:31.889 filename=/dev/nvme0n2 00:09:31.889 [job2] 00:09:31.889 filename=/dev/nvme0n3 00:09:31.889 [job3] 00:09:31.889 filename=/dev/nvme0n4 00:09:31.889 Could not set queue depth (nvme0n1) 00:09:31.889 Could not set queue depth (nvme0n2) 00:09:31.889 Could not set queue depth (nvme0n3) 00:09:31.889 Could not set queue depth (nvme0n4) 00:09:31.889 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.889 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.889 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.889 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.889 fio-3.35 00:09:31.889 Starting 4 threads 00:09:35.174 13:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:35.174 fio: pid=67857, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:35.174 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=45711360, buflen=4096 00:09:35.174 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:35.174 fio: pid=67856, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:35.174 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=70451200, buflen=4096 00:09:35.174 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.174 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:35.433 fio: pid=67853, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:35.433 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=55586816, buflen=4096 00:09:35.433 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.433 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:35.692 fio: pid=67854, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:35.692 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=20586496, buflen=4096 00:09:35.692 00:09:35.692 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67853: Thu Jul 25 13:05:27 2024 00:09:35.692 read: IOPS=3984, BW=15.6MiB/s (16.3MB/s)(53.0MiB/3406msec) 00:09:35.692 slat (usec): min=8, max=13940, avg=14.75, stdev=189.65 00:09:35.692 clat (usec): min=125, max=3070, avg=235.08, stdev=57.50 00:09:35.692 lat (usec): min=138, max=14131, avg=249.83, stdev=197.99 00:09:35.692 clat percentiles (usec): 00:09:35.692 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 157], 20.00th=[ 223], 00:09:35.692 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:09:35.692 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:09:35.692 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 553], 99.95th=[ 
775], 00:09:35.692 | 99.99th=[ 2442] 00:09:35.692 bw ( KiB/s): min=15078, max=15392, per=22.11%, avg=15234.33, stdev=129.15, samples=6 00:09:35.692 iops : min= 3769, max= 3848, avg=3808.50, stdev=32.41, samples=6 00:09:35.692 lat (usec) : 250=61.16%, 500=38.72%, 750=0.07%, 1000=0.01% 00:09:35.692 lat (msec) : 2=0.02%, 4=0.02% 00:09:35.692 cpu : usr=0.97%, sys=4.20%, ctx=13578, majf=0, minf=1 00:09:35.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.692 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.692 issued rwts: total=13572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.692 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67854: Thu Jul 25 13:05:27 2024 00:09:35.692 read: IOPS=5821, BW=22.7MiB/s (23.8MB/s)(83.6MiB/3678msec) 00:09:35.692 slat (usec): min=10, max=12834, avg=15.98, stdev=153.12 00:09:35.692 clat (usec): min=119, max=3473, avg=154.45, stdev=33.27 00:09:35.692 lat (usec): min=131, max=13039, avg=170.42, stdev=157.26 00:09:35.692 clat percentiles (usec): 00:09:35.692 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:09:35.692 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:09:35.692 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:09:35.692 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 367], 99.95th=[ 594], 00:09:35.692 | 99.99th=[ 1336] 00:09:35.692 bw ( KiB/s): min=22400, max=24144, per=33.87%, avg=23334.14, stdev=691.55, samples=7 00:09:35.692 iops : min= 5600, max= 6036, avg=5833.43, stdev=173.02, samples=7 00:09:35.692 lat (usec) : 250=99.87%, 500=0.06%, 750=0.03%, 1000=0.01% 00:09:35.692 lat (msec) : 2=0.02%, 4=0.01% 00:09:35.692 cpu : usr=1.60%, sys=7.07%, ctx=21418, majf=0, minf=1 00:09:35.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.692 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.692 issued rwts: total=21411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.692 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67856: Thu Jul 25 13:05:27 2024 00:09:35.692 read: IOPS=5407, BW=21.1MiB/s (22.1MB/s)(67.2MiB/3181msec) 00:09:35.692 slat (usec): min=11, max=11451, avg=15.10, stdev=105.62 00:09:35.692 clat (usec): min=134, max=2091, avg=168.44, stdev=32.22 00:09:35.692 lat (usec): min=145, max=11633, avg=183.53, stdev=110.68 00:09:35.692 clat percentiles (usec): 00:09:35.692 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:35.692 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:09:35.692 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 196], 00:09:35.693 | 99.00th=[ 219], 99.50th=[ 251], 99.90th=[ 537], 99.95th=[ 775], 00:09:35.693 | 99.99th=[ 1876] 00:09:35.693 bw ( KiB/s): min=20776, max=22104, per=31.43%, avg=21654.67, stdev=514.88, samples=6 00:09:35.693 iops : min= 5194, max= 5526, avg=5413.67, stdev=128.72, samples=6 00:09:35.693 lat (usec) : 250=99.49%, 500=0.39%, 750=0.06%, 1000=0.04% 00:09:35.693 lat (msec) : 2=0.01%, 4=0.01% 00:09:35.693 cpu : usr=1.60%, sys=6.76%, ctx=17203, majf=0, minf=1 00:09:35.693 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.693 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.693 issued rwts: total=17201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.693 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67857: Thu Jul 25 13:05:27 2024 00:09:35.693 read: IOPS=3792, BW=14.8MiB/s (15.5MB/s)(43.6MiB/2943msec) 00:09:35.693 slat (nsec): min=8505, max=94453, avg=14304.50, stdev=3941.06 00:09:35.693 clat (usec): min=199, max=2121, avg=248.00, stdev=30.87 00:09:35.693 lat (usec): min=212, max=2133, avg=262.30, stdev=31.16 00:09:35.693 clat percentiles (usec): 00:09:35.693 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:09:35.693 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:09:35.693 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:09:35.693 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 367], 99.95th=[ 578], 00:09:35.693 | 99.99th=[ 1434] 00:09:35.693 bw ( KiB/s): min=15056, max=15400, per=22.07%, avg=15206.40, stdev=136.63, samples=5 00:09:35.693 iops : min= 3764, max= 3850, avg=3801.60, stdev=34.16, samples=5 00:09:35.693 lat (usec) : 250=60.02%, 500=39.90%, 750=0.04%, 1000=0.01% 00:09:35.693 lat (msec) : 2=0.02%, 4=0.01% 00:09:35.693 cpu : usr=1.36%, sys=4.86%, ctx=11162, majf=0, minf=1 00:09:35.693 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.693 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.693 issued rwts: total=11161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.693 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.693 00:09:35.693 Run status group 0 (all jobs): 00:09:35.693 READ: bw=67.3MiB/s (70.5MB/s), 14.8MiB/s-22.7MiB/s (15.5MB/s-23.8MB/s), io=247MiB (259MB), run=2943-3678msec 00:09:35.693 00:09:35.693 Disk stats (read/write): 00:09:35.693 nvme0n1: ios=13349/0, merge=0/0, ticks=2960/0, in_queue=2960, util=95.22% 00:09:35.693 nvme0n2: ios=21056/0, merge=0/0, ticks=3336/0, in_queue=3336, util=95.53% 00:09:35.693 nvme0n3: ios=16854/0, merge=0/0, ticks=2909/0, in_queue=2909, util=96.27% 00:09:35.693 nvme0n4: ios=10879/0, merge=0/0, ticks=2661/0, in_queue=2661, util=96.73% 00:09:35.693 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.693 13:05:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:35.952 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.952 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:36.210 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.210 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:36.469 13:05:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.469 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:36.727 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.727 13:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67803 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:36.986 nvmf hotplug test: fio failed as expected 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:36.986 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.245 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.245 rmmod nvme_tcp 00:09:37.245 rmmod nvme_fabrics 00:09:37.245 rmmod nvme_keyring 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67427 ']' 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67427 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67427 ']' 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67427 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67427 00:09:37.504 killing process with pid 67427 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67427' 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67427 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67427 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.504 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:37.763 00:09:37.763 real 0m19.250s 00:09:37.763 user 1m12.131s 00:09:37.763 sys 0m10.618s 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.763 ************************************ 00:09:37.763 END TEST nvmf_fio_target 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.763 
************************************ 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.763 ************************************ 00:09:37.763 START TEST nvmf_bdevio 00:09:37.763 ************************************ 00:09:37.763 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.763 * Looking for test storage... 00:09:37.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.764 13:05:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:37.764 Cannot find device "nvmf_tgt_br" 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.764 Cannot find device "nvmf_tgt_br2" 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:37.764 Cannot find device "nvmf_tgt_br" 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:37.764 Cannot find device "nvmf_tgt_br2" 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:37.764 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:38.023 13:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:38.023 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:38.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:09:38.024 00:09:38.024 --- 10.0.0.2 ping statistics --- 00:09:38.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.024 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:38.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:09:38.024 00:09:38.024 --- 10.0.0.3 ping statistics --- 00:09:38.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.024 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:38.024 00:09:38.024 --- 10.0.0.1 ping statistics --- 00:09:38.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.024 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.024 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68120 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68120 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 68120 ']' 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.283 13:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.283 [2024-07-25 13:05:30.261930] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
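For reference, the veth/namespace topology that nvmf_veth_init builds in the trace above can be collected into one condensed sketch. This is an illustrative summary of the commands visible in the log (the namespace, interface and address names are exactly the ones the test uses: 10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the target namespace); it is not a substitute for nvmf/common.sh itself.

  # one namespace for the target, two veth pairs into it, everything bridged on the host side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # connectivity checks, as above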
00:09:38.283 [2024-07-25 13:05:30.262026] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.283 [2024-07-25 13:05:30.398994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.541 [2024-07-25 13:05:30.493127] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.541 [2024-07-25 13:05:30.493194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.541 [2024-07-25 13:05:30.493220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.541 [2024-07-25 13:05:30.493227] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.541 [2024-07-25 13:05:30.493234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.541 [2024-07-25 13:05:30.493431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:38.541 [2024-07-25 13:05:30.493822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:38.541 [2024-07-25 13:05:30.493923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.541 [2024-07-25 13:05:30.493914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:38.541 [2024-07-25 13:05:30.546858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.109 [2024-07-25 13:05:31.256812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.109 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 Malloc0 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.367 [2024-07-25 13:05:31.330090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:39.367 { 00:09:39.367 "params": { 00:09:39.367 "name": "Nvme$subsystem", 00:09:39.367 "trtype": "$TEST_TRANSPORT", 00:09:39.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.367 "adrfam": "ipv4", 00:09:39.367 "trsvcid": "$NVMF_PORT", 00:09:39.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.367 "hdgst": ${hdgst:-false}, 00:09:39.367 "ddgst": ${ddgst:-false} 00:09:39.367 }, 00:09:39.367 "method": "bdev_nvme_attach_controller" 00:09:39.367 } 00:09:39.367 EOF 00:09:39.367 )") 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:39.367 13:05:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:39.367 "params": { 00:09:39.367 "name": "Nvme1", 00:09:39.367 "trtype": "tcp", 00:09:39.367 "traddr": "10.0.0.2", 00:09:39.367 "adrfam": "ipv4", 00:09:39.367 "trsvcid": "4420", 00:09:39.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.367 "hdgst": false, 00:09:39.367 "ddgst": false 00:09:39.367 }, 00:09:39.367 "method": "bdev_nvme_attach_controller" 00:09:39.367 }' 00:09:39.367 [2024-07-25 13:05:31.388127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:39.367 [2024-07-25 13:05:31.388219] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68156 ] 00:09:39.367 [2024-07-25 13:05:31.527726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.625 [2024-07-25 13:05:31.640405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.625 [2024-07-25 13:05:31.640527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.625 [2024-07-25 13:05:31.640789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.625 [2024-07-25 13:05:31.708887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.884 I/O targets: 00:09:39.884 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:39.884 00:09:39.884 00:09:39.884 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.884 http://cunit.sourceforge.net/ 00:09:39.884 00:09:39.884 00:09:39.884 Suite: bdevio tests on: Nvme1n1 00:09:39.884 Test: blockdev write read block ...passed 00:09:39.884 Test: blockdev write zeroes read block ...passed 00:09:39.884 Test: blockdev write zeroes read no split ...passed 00:09:39.884 Test: blockdev write zeroes read split ...passed 00:09:39.884 Test: blockdev write zeroes read split partial ...passed 00:09:39.884 Test: blockdev reset ...[2024-07-25 13:05:31.859002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:39.884 [2024-07-25 13:05:31.859216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd467c0 (9): Bad file descriptor 00:09:39.884 [2024-07-25 13:05:31.876483] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:39.884 passed 00:09:39.884 Test: blockdev write read 8 blocks ...passed 00:09:39.884 Test: blockdev write read size > 128k ...passed 00:09:39.884 Test: blockdev write read invalid size ...passed 00:09:39.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:39.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:39.884 Test: blockdev write read max offset ...passed 00:09:39.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:39.884 Test: blockdev writev readv 8 blocks ...passed 00:09:39.884 Test: blockdev writev readv 30 x 1block ...passed 00:09:39.884 Test: blockdev writev readv block ...passed 00:09:39.884 Test: blockdev writev readv size > 128k ...passed 00:09:39.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:39.884 Test: blockdev comparev and writev ...[2024-07-25 13:05:31.883587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.883645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.883682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.883694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.884060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.884083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.884101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.884112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.884476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.884499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.884516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.884528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.884862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.884886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.884904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.884 [2024-07-25 13:05:31.884915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:39.884 passed 00:09:39.884 Test: blockdev nvme passthru rw ...passed 00:09:39.884 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:05:31.885618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.884 [2024-07-25 13:05:31.885644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.885753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.884 [2024-07-25 13:05:31.885769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.885884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.884 [2024-07-25 13:05:31.885905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:39.884 [2024-07-25 13:05:31.886010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.884 [2024-07-25 13:05:31.886032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:39.884 passed 00:09:39.884 Test: blockdev nvme admin passthru ...passed 00:09:39.884 Test: blockdev copy ...passed 00:09:39.884 00:09:39.884 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.884 suites 1 1 n/a 0 0 00:09:39.884 tests 23 23 23 0 0 00:09:39.884 asserts 152 152 152 0 n/a 00:09:39.884 00:09:39.884 Elapsed time = 0.145 seconds 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.143 rmmod nvme_tcp 00:09:40.143 rmmod nvme_fabrics 00:09:40.143 rmmod nvme_keyring 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
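For reference, the target-side setup that the bdevio run above exercised is traced piecemeal earlier in this test (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py); collected in one place it amounts to the sketch below. It assumes an nvmf_tgt already running inside nvmf_tgt_ns_spdk with the default /var/tmp/spdk.sock RPC socket, reuses the flags exactly as they appear in the trace, and is a condensed illustration rather than a replacement for target/bdevio.sh.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # same transport flags the test passes above
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # teardown, as at the end of the test:
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1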
00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68120 ']' 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68120 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 68120 ']' 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 68120 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68120 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:40.143 killing process with pid 68120 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68120' 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 68120 00:09:40.143 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 68120 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:40.402 00:09:40.402 real 0m2.759s 00:09:40.402 user 0m9.343s 00:09:40.402 sys 0m0.740s 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:40.402 ************************************ 00:09:40.402 END TEST nvmf_bdevio 00:09:40.402 ************************************ 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:40.402 ************************************ 00:09:40.402 END TEST nvmf_target_core 00:09:40.402 ************************************ 00:09:40.402 00:09:40.402 real 2m32.990s 00:09:40.402 user 6m48.598s 00:09:40.402 sys 0m52.157s 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.402 13:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.662 13:05:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:40.662 13:05:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.662 13:05:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.662 13:05:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.662 ************************************ 00:09:40.662 START TEST nvmf_target_extra 00:09:40.662 ************************************ 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:40.662 * Looking for test storage... 00:09:40.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.662 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:40.663 13:05:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:40.663 ************************************ 00:09:40.663 START TEST nvmf_auth_target 00:09:40.663 ************************************ 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:40.663 * Looking for test storage... 00:09:40.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.663 13:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.663 13:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:40.663 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:40.922 Cannot find device "nvmf_tgt_br" 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:40.922 Cannot find device "nvmf_tgt_br2" 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:40.922 Cannot find device "nvmf_tgt_br" 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:40.922 Cannot find device "nvmf_tgt_br2" 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:40.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:40.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:40.922 13:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:40.922 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:40.923 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:41.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:41.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:41.182 00:09:41.182 --- 10.0.0.2 ping statistics --- 00:09:41.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.182 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:41.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:41.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:41.182 00:09:41.182 --- 10.0.0.3 ping statistics --- 00:09:41.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.182 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:09:41.182 00:09:41.182 --- 10.0.0.1 ping statistics --- 00:09:41.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.182 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68384 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68384 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68384 ']' 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
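The nvmf_veth_init trace above (nvmf/common.sh@141-207) first tears down any leftover interfaces and then builds a small bridged test topology: a host-side veth pair nvmf_init_if/nvmf_init_br carrying 10.0.0.1, two target-side pairs whose tgt_if ends are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, and an nvmf_br bridge joining the three br ends, plus iptables rules admitting TCP/4420. Pulled together into one standalone sketch (same names and addresses as in the log; the initial cleanup of stale devices is omitted, and this must run as root):

    # Sketch of the network the trace above builds, condensed from nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target IP
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # reaches nvmf_tgt_if through the bridge, as in the log

With that in place, the three pings in the trace confirm 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace, after which NVMF_APP is prefixed with ip netns exec nvmf_tgt_ns_spdk so the target runs inside that namespace.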
00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.182 13:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68416 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cba26cff6bec4f8d7c3ea2199c4fdd106a9cbd044f938d87 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.WWa 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cba26cff6bec4f8d7c3ea2199c4fdd106a9cbd044f938d87 0 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cba26cff6bec4f8d7c3ea2199c4fdd106a9cbd044f938d87 0 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cba26cff6bec4f8d7c3ea2199c4fdd106a9cbd044f938d87 00:09:42.118 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:42.119 13:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.WWa 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.WWa 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.WWa 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:42.119 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4cbd89e92b8aae3c795558c10e795c187c6d3b1f780421535d8a3c990cc8a5c5 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ToC 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4cbd89e92b8aae3c795558c10e795c187c6d3b1f780421535d8a3c990cc8a5c5 3 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4cbd89e92b8aae3c795558c10e795c187c6d3b1f780421535d8a3c990cc8a5c5 3 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4cbd89e92b8aae3c795558c10e795c187c6d3b1f780421535d8a3c990cc8a5c5 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ToC 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ToC 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ToC 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:42.378 13:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6deb9a5b1a32f0a45000db2509819d1e 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wQP 00:09:42.378 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6deb9a5b1a32f0a45000db2509819d1e 1 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6deb9a5b1a32f0a45000db2509819d1e 1 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6deb9a5b1a32f0a45000db2509819d1e 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wQP 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wQP 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.wQP 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ad01c987ab6eeb466fc9ee5870ffd5ddb1b92ec1624634e9 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yPp 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ad01c987ab6eeb466fc9ee5870ffd5ddb1b92ec1624634e9 2 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ad01c987ab6eeb466fc9ee5870ffd5ddb1b92ec1624634e9 2 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ad01c987ab6eeb466fc9ee5870ffd5ddb1b92ec1624634e9 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yPp 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yPp 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.yPp 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2172f5fe30e90f65e1c714b9ecf704fb3c00f0b31b34c6f3 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.EOp 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2172f5fe30e90f65e1c714b9ecf704fb3c00f0b31b34c6f3 2 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2172f5fe30e90f65e1c714b9ecf704fb3c00f0b31b34c6f3 2 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2172f5fe30e90f65e1c714b9ecf704fb3c00f0b31b34c6f3 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:42.379 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.EOp 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.EOp 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.EOp 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:42.638 13:05:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:42.638 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a18bc6b0af1ecd91e68ff264b5d5b5a0 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7DW 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a18bc6b0af1ecd91e68ff264b5d5b5a0 1 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a18bc6b0af1ecd91e68ff264b5d5b5a0 1 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a18bc6b0af1ecd91e68ff264b5d5b5a0 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7DW 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7DW 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.7DW 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3de873788ee410598213cf05896e884e26854c8b492a096d0f0a6d7a9ba80530 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZB0 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
3de873788ee410598213cf05896e884e26854c8b492a096d0f0a6d7a9ba80530 3 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3de873788ee410598213cf05896e884e26854c8b492a096d0f0a6d7a9ba80530 3 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3de873788ee410598213cf05896e884e26854c8b492a096d0f0a6d7a9ba80530 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZB0 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZB0 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ZB0 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68384 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68384 ']' 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.639 13:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68416 /var/tmp/host.sock 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68416 ']' 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
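The gen_dhchap_key calls above draw random bytes with xxd, write them to mktemp files, and wrap them into DHHC-1 secrets via an inline python snippet (format_key in nvmf/common.sh, not expanded in this trace); the finished secrets reappear verbatim in the nvme connect commands further down (DHHC-1:00:..., DHHC-1:01:..., and so on). A minimal sketch of one such secret, assuming the body is base64 of the ASCII hex key with a little-endian CRC-32 trailer appended, which matches the lengths of the strings passed to nvme connect below but is an inference, not taken from this trace:

    # Hypothetical recreation of a secret shaped like keys[0] above:
    # 24 random bytes as 48 hex chars, then DHHC-1:<digest id>:base64(key + CRC-32):
    hexkey=$(xxd -p -c0 -l 24 /dev/urandom)
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:00:%s:" % base64.b64encode(k+crc).decode())' "$hexkey"

The middle field is the digest id the key is meant for, using the same mapping as the digests associative array in the trace: 00 null, 01 sha256, 02 sha384, 03 sha512.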
00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.898 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.WWa 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.WWa 00:09:43.157 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.WWa 00:09:43.416 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ToC ]] 00:09:43.416 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ToC 00:09:43.416 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.416 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.416 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.416 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ToC 00:09:43.416 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ToC 00:09:43.673 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:43.673 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wQP 00:09:43.673 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.673 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.673 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.674 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.wQP 00:09:43.674 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.wQP 00:09:43.932 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.yPp ]] 00:09:43.932 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yPp 00:09:43.932 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.932 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.932 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.932 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yPp 00:09:43.932 13:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yPp 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.EOp 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.EOp 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.EOp 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.7DW ]] 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7DW 00:09:44.201 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.202 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7DW 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7DW 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZB0 00:09:44.463 13:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZB0 00:09:44.463 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZB0 00:09:44.721 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:09:44.721 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:09:44.721 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:44.721 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:44.721 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:44.721 13:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:44.980 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:09:45.238 00:09:45.238 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:45.238 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:45.238 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:45.499 { 00:09:45.499 "cntlid": 1, 00:09:45.499 "qid": 0, 00:09:45.499 "state": "enabled", 00:09:45.499 "thread": "nvmf_tgt_poll_group_000", 00:09:45.499 "listen_address": { 00:09:45.499 "trtype": "TCP", 00:09:45.499 "adrfam": "IPv4", 00:09:45.499 "traddr": "10.0.0.2", 00:09:45.499 "trsvcid": "4420" 00:09:45.499 }, 00:09:45.499 "peer_address": { 00:09:45.499 "trtype": "TCP", 00:09:45.499 "adrfam": "IPv4", 00:09:45.499 "traddr": "10.0.0.1", 00:09:45.499 "trsvcid": "40182" 00:09:45.499 }, 00:09:45.499 "auth": { 00:09:45.499 "state": "completed", 00:09:45.499 "digest": "sha256", 00:09:45.499 "dhgroup": "null" 00:09:45.499 } 00:09:45.499 } 00:09:45.499 ]' 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:45.499 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:45.762 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:45.762 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:45.762 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:45.762 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:45.762 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.021 13:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:09:50.207 13:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.207 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.207 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.466 00:09:50.466 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:50.466 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.466 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
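Each pass asserts on the auth block of the first qpair returned by nvmf_subsystem_get_qpairs, using exactly the jq filters seen in the surrounding trace. A condensed sketch of that check for the sha256/null pass (rpc.py path as used throughout this log; the target-side socket is the default /var/tmp/spdk.sock, while host-side calls add -s /var/tmp/host.sock):

    # Verify the freshly attached controller authenticated with sha256 digest / null dhgroup.
    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]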
00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:50.725 { 00:09:50.725 "cntlid": 3, 00:09:50.725 "qid": 0, 00:09:50.725 "state": "enabled", 00:09:50.725 "thread": "nvmf_tgt_poll_group_000", 00:09:50.725 "listen_address": { 00:09:50.725 "trtype": "TCP", 00:09:50.725 "adrfam": "IPv4", 00:09:50.725 "traddr": "10.0.0.2", 00:09:50.725 "trsvcid": "4420" 00:09:50.725 }, 00:09:50.725 "peer_address": { 00:09:50.725 "trtype": "TCP", 00:09:50.725 "adrfam": "IPv4", 00:09:50.725 "traddr": "10.0.0.1", 00:09:50.725 "trsvcid": "39568" 00:09:50.725 }, 00:09:50.725 "auth": { 00:09:50.725 "state": "completed", 00:09:50.725 "digest": "sha256", 00:09:50.725 "dhgroup": "null" 00:09:50.725 } 00:09:50.725 } 00:09:50.725 ]' 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.725 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:50.984 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:50.984 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:50.984 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.984 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.984 13:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.243 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
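Every digest/dhgroup/key combination in the log runs the same round trip: register the key pair on the target subsystem, attach a controller from the host-side spdk_tgt using the matching keyring entries, verify the qpair's auth block, detach, then repeat the handshake with the kernel initiator via nvme connect using the formatted DHHC-1 secrets, and finally remove the host again. A condensed sketch of one iteration (key1/ckey1, values and temp-file names as they appear in this run; the rpc wrappers are simplified stand-ins for the suite's rpc_cmd/hostrpc helpers, and the key files hold the same DHHC-1 strings that appear literally in the connect commands):

    # Simplified stand-ins for the suite's helpers (assumption: plain rpc.py invocations).
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                        # target, /var/tmp/spdk.sock
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side spdk_tgt
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1

    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect nvme0; qpair auth checked as sketched above
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 \
        --dhchap-secret "$(cat /tmp/spdk.key-sha256.wQP)" \
        --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha384.yPp)"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"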
00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:51.810 13:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.069 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.327 00:09:52.327 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:52.327 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.327 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:52.327 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:52.586 { 00:09:52.586 "cntlid": 5, 00:09:52.586 "qid": 0, 00:09:52.586 "state": "enabled", 00:09:52.586 "thread": "nvmf_tgt_poll_group_000", 00:09:52.586 "listen_address": { 00:09:52.586 "trtype": "TCP", 00:09:52.586 "adrfam": "IPv4", 00:09:52.586 "traddr": "10.0.0.2", 00:09:52.586 "trsvcid": "4420" 00:09:52.586 }, 00:09:52.586 "peer_address": { 00:09:52.586 "trtype": "TCP", 00:09:52.586 "adrfam": "IPv4", 00:09:52.586 "traddr": "10.0.0.1", 00:09:52.586 "trsvcid": "39586" 00:09:52.586 }, 00:09:52.586 "auth": { 00:09:52.586 "state": "completed", 00:09:52.586 "digest": "sha256", 00:09:52.586 "dhgroup": "null" 00:09:52.586 } 00:09:52.586 } 00:09:52.586 ]' 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.586 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:52.844 13:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:09:53.409 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.409 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:53.409 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.409 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.409 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.409 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:53.409 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:53.409 13:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:53.667 13:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:09:53.925 00:09:53.925 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:53.925 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:53.925 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:54.184 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.184 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.184 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.184 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:54.442 { 00:09:54.442 "cntlid": 7, 00:09:54.442 "qid": 0, 00:09:54.442 "state": "enabled", 00:09:54.442 "thread": "nvmf_tgt_poll_group_000", 00:09:54.442 "listen_address": { 00:09:54.442 "trtype": "TCP", 00:09:54.442 "adrfam": "IPv4", 00:09:54.442 "traddr": 
"10.0.0.2", 00:09:54.442 "trsvcid": "4420" 00:09:54.442 }, 00:09:54.442 "peer_address": { 00:09:54.442 "trtype": "TCP", 00:09:54.442 "adrfam": "IPv4", 00:09:54.442 "traddr": "10.0.0.1", 00:09:54.442 "trsvcid": "39608" 00:09:54.442 }, 00:09:54.442 "auth": { 00:09:54.442 "state": "completed", 00:09:54.442 "digest": "sha256", 00:09:54.442 "dhgroup": "null" 00:09:54.442 } 00:09:54.442 } 00:09:54.442 ]' 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.442 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.700 13:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:55.267 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:55.525 13:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.525 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.784 00:09:55.784 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:55.784 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.784 13:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:56.043 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.043 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.043 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.043 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.043 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.043 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:56.043 { 00:09:56.043 "cntlid": 9, 00:09:56.043 "qid": 0, 00:09:56.043 "state": "enabled", 00:09:56.043 "thread": "nvmf_tgt_poll_group_000", 00:09:56.043 "listen_address": { 00:09:56.043 "trtype": "TCP", 00:09:56.043 "adrfam": "IPv4", 00:09:56.043 "traddr": "10.0.0.2", 00:09:56.043 "trsvcid": "4420" 00:09:56.043 }, 00:09:56.043 "peer_address": { 00:09:56.043 "trtype": "TCP", 00:09:56.043 "adrfam": "IPv4", 00:09:56.043 "traddr": "10.0.0.1", 00:09:56.043 "trsvcid": "39624" 00:09:56.043 }, 00:09:56.043 "auth": { 00:09:56.043 "state": "completed", 00:09:56.043 "digest": "sha256", 00:09:56.043 "dhgroup": "ffdhe2048" 00:09:56.043 } 00:09:56.043 } 
00:09:56.043 ]' 00:09:56.043 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:56.302 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.302 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:56.302 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:56.302 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:56.302 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.302 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.302 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.561 13:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.128 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.386 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.952 00:09:57.952 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:57.952 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:09:57.952 13:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:09:58.211 { 00:09:58.211 "cntlid": 11, 00:09:58.211 "qid": 0, 00:09:58.211 "state": "enabled", 00:09:58.211 "thread": "nvmf_tgt_poll_group_000", 00:09:58.211 "listen_address": { 00:09:58.211 "trtype": "TCP", 00:09:58.211 "adrfam": "IPv4", 00:09:58.211 "traddr": "10.0.0.2", 00:09:58.211 "trsvcid": "4420" 00:09:58.211 }, 00:09:58.211 "peer_address": { 00:09:58.211 "trtype": "TCP", 00:09:58.211 "adrfam": "IPv4", 00:09:58.211 "traddr": "10.0.0.1", 00:09:58.211 "trsvcid": "39656" 00:09:58.211 }, 00:09:58.211 "auth": { 00:09:58.211 "state": "completed", 00:09:58.211 "digest": "sha256", 00:09:58.211 "dhgroup": "ffdhe2048" 00:09:58.211 } 00:09:58.211 } 00:09:58.211 ]' 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:09:58.211 13:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:58.211 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.469 13:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:09:59.405 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:59.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:59.405 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
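The trace above repeats the same verification cycle once per DH group and key index: configure the host-side bdev_nvme DH-HMAC-CHAP options, register the host NQN on the target subsystem with the key under test, attach a controller over TCP, check the authenticated qpair, then tear the connection down and repeat the handshake through nvme-cli. The following is a rough, non-authoritative outline of that cycle as it appears in the trace, not the literal test script: hostrpc and rpc_cmd are the test suite's wrappers around scripts/rpc.py for the host application (-s /var/tmp/host.sock) and the target application, and <host-nqn>, <uuid>, keyN/ckeyN and the DHHC-1 secrets are placeholders for the concrete values visible in the log.

  # restrict the host to the digest/dhgroup pair under test (here sha256 + ffdhe2048)
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # register the host on the target subsystem with the key (and controller key, when that key index has one)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # attach a controller over TCP with the same keypair
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # confirm the controller exists and the qpair authenticated with the expected parameters
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'  # expect sha256 / ffdhe2048 / completed
  # tear down the RPC-driven connection, then exercise the same handshake through nvme-cli with the raw secrets
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <host-nqn> --hostid <uuid> \
    --dhchap-secret 'DHHC-1:<nn>:<secret>:' --dhchap-ctrl-secret 'DHHC-1:<nn>:<ctrl-secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>

In the trace the jq checks run against captured variables rather than direct pipes, but the commands, flags and expected values are the ones shown above; the cycle then restarts with the next key index and, once all keys are exhausted, with the next DH group.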
00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.406 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.974 00:09:59.974 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:09:59.974 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.974 13:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:00.234 { 00:10:00.234 "cntlid": 13, 00:10:00.234 "qid": 0, 00:10:00.234 "state": "enabled", 00:10:00.234 "thread": "nvmf_tgt_poll_group_000", 00:10:00.234 "listen_address": { 00:10:00.234 "trtype": "TCP", 00:10:00.234 "adrfam": "IPv4", 00:10:00.234 "traddr": "10.0.0.2", 00:10:00.234 "trsvcid": "4420" 00:10:00.234 }, 00:10:00.234 "peer_address": { 00:10:00.234 "trtype": "TCP", 00:10:00.234 "adrfam": "IPv4", 00:10:00.234 "traddr": "10.0.0.1", 00:10:00.234 "trsvcid": "58994" 00:10:00.234 }, 00:10:00.234 "auth": { 00:10:00.234 "state": "completed", 00:10:00.234 "digest": "sha256", 00:10:00.234 "dhgroup": "ffdhe2048" 00:10:00.234 } 00:10:00.234 } 00:10:00.234 ]' 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:00.234 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.515 13:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:01.082 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.083 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:01.083 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.083 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.083 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.083 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:01.083 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.083 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.340 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.598 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.598 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:01.598 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:01.856 00:10:01.856 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:01.856 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:01.856 13:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:02.113 { 00:10:02.113 "cntlid": 15, 00:10:02.113 "qid": 0, 00:10:02.113 "state": "enabled", 00:10:02.113 "thread": "nvmf_tgt_poll_group_000", 00:10:02.113 "listen_address": { 00:10:02.113 "trtype": "TCP", 00:10:02.113 "adrfam": "IPv4", 00:10:02.113 "traddr": "10.0.0.2", 00:10:02.113 "trsvcid": "4420" 00:10:02.113 }, 00:10:02.113 "peer_address": { 00:10:02.113 "trtype": "TCP", 00:10:02.113 "adrfam": "IPv4", 00:10:02.113 "traddr": "10.0.0.1", 00:10:02.113 "trsvcid": "59010" 00:10:02.113 }, 00:10:02.113 "auth": { 00:10:02.113 "state": "completed", 00:10:02.113 "digest": "sha256", 00:10:02.113 "dhgroup": "ffdhe2048" 00:10:02.113 } 00:10:02.113 } 00:10:02.113 ]' 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.113 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.370 13:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:03.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.303 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.869 00:10:03.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:03.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.869 13:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:04.128 { 00:10:04.128 "cntlid": 17, 00:10:04.128 "qid": 0, 00:10:04.128 "state": "enabled", 00:10:04.128 "thread": "nvmf_tgt_poll_group_000", 00:10:04.128 "listen_address": { 00:10:04.128 "trtype": "TCP", 00:10:04.128 "adrfam": "IPv4", 00:10:04.128 "traddr": "10.0.0.2", 00:10:04.128 "trsvcid": "4420" 00:10:04.128 }, 00:10:04.128 "peer_address": { 00:10:04.128 "trtype": "TCP", 00:10:04.128 "adrfam": "IPv4", 00:10:04.128 "traddr": "10.0.0.1", 00:10:04.128 "trsvcid": "59030" 00:10:04.128 }, 00:10:04.128 "auth": { 00:10:04.128 "state": "completed", 00:10:04.128 "digest": "sha256", 00:10:04.128 "dhgroup": "ffdhe3072" 00:10:04.128 } 00:10:04.128 } 00:10:04.128 ]' 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.128 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.386 13:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:10:05.320 13:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.320 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.900 00:10:05.900 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:05.900 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.900 13:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:06.157 { 00:10:06.157 "cntlid": 19, 00:10:06.157 "qid": 0, 00:10:06.157 "state": "enabled", 00:10:06.157 "thread": "nvmf_tgt_poll_group_000", 00:10:06.157 "listen_address": { 00:10:06.157 "trtype": "TCP", 00:10:06.157 "adrfam": "IPv4", 00:10:06.157 "traddr": "10.0.0.2", 00:10:06.157 "trsvcid": "4420" 00:10:06.157 }, 00:10:06.157 "peer_address": { 00:10:06.157 "trtype": "TCP", 00:10:06.157 "adrfam": "IPv4", 00:10:06.157 "traddr": "10.0.0.1", 00:10:06.157 "trsvcid": "59068" 00:10:06.157 }, 00:10:06.157 "auth": { 00:10:06.157 "state": "completed", 00:10:06.157 "digest": "sha256", 00:10:06.157 "dhgroup": "ffdhe3072" 00:10:06.157 } 00:10:06.157 } 00:10:06.157 ]' 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.157 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.415 13:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:07.361 
13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.361 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.927 00:10:07.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:07.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:07.927 13:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.185 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.185 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:08.185 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.185 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.185 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.185 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:08.185 { 00:10:08.185 "cntlid": 21, 00:10:08.185 "qid": 0, 00:10:08.185 "state": "enabled", 00:10:08.185 "thread": "nvmf_tgt_poll_group_000", 00:10:08.185 "listen_address": { 00:10:08.185 "trtype": "TCP", 00:10:08.185 "adrfam": "IPv4", 00:10:08.185 "traddr": "10.0.0.2", 00:10:08.185 "trsvcid": "4420" 00:10:08.185 }, 00:10:08.186 "peer_address": { 00:10:08.186 "trtype": "TCP", 00:10:08.186 "adrfam": "IPv4", 00:10:08.186 "traddr": "10.0.0.1", 00:10:08.186 "trsvcid": "59094" 00:10:08.186 }, 00:10:08.186 "auth": { 00:10:08.186 "state": "completed", 00:10:08.186 "digest": "sha256", 00:10:08.186 "dhgroup": "ffdhe3072" 00:10:08.186 } 00:10:08.186 } 00:10:08.186 ]' 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.186 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.444 13:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:09.380 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:09.946 00:10:09.946 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:09.946 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:09.946 13:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.946 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.946 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.946 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.946 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:10.204 { 00:10:10.204 "cntlid": 
23, 00:10:10.204 "qid": 0, 00:10:10.204 "state": "enabled", 00:10:10.204 "thread": "nvmf_tgt_poll_group_000", 00:10:10.204 "listen_address": { 00:10:10.204 "trtype": "TCP", 00:10:10.204 "adrfam": "IPv4", 00:10:10.204 "traddr": "10.0.0.2", 00:10:10.204 "trsvcid": "4420" 00:10:10.204 }, 00:10:10.204 "peer_address": { 00:10:10.204 "trtype": "TCP", 00:10:10.204 "adrfam": "IPv4", 00:10:10.204 "traddr": "10.0.0.1", 00:10:10.204 "trsvcid": "59136" 00:10:10.204 }, 00:10:10.204 "auth": { 00:10:10.204 "state": "completed", 00:10:10.204 "digest": "sha256", 00:10:10.204 "dhgroup": "ffdhe3072" 00:10:10.204 } 00:10:10.204 } 00:10:10.204 ]' 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.204 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.463 13:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:11.030 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:11.289 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:11.547 13:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.547 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.805 00:10:11.805 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:11.805 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:11.805 13:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:12.063 { 00:10:12.063 "cntlid": 25, 00:10:12.063 "qid": 0, 00:10:12.063 "state": "enabled", 00:10:12.063 "thread": "nvmf_tgt_poll_group_000", 00:10:12.063 "listen_address": { 00:10:12.063 "trtype": "TCP", 00:10:12.063 "adrfam": "IPv4", 00:10:12.063 "traddr": "10.0.0.2", 00:10:12.063 "trsvcid": "4420" 00:10:12.063 }, 00:10:12.063 "peer_address": { 00:10:12.063 "trtype": "TCP", 00:10:12.063 
"adrfam": "IPv4", 00:10:12.063 "traddr": "10.0.0.1", 00:10:12.063 "trsvcid": "40876" 00:10:12.063 }, 00:10:12.063 "auth": { 00:10:12.063 "state": "completed", 00:10:12.063 "digest": "sha256", 00:10:12.063 "dhgroup": "ffdhe4096" 00:10:12.063 } 00:10:12.063 } 00:10:12.063 ]' 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:12.063 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:12.321 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.321 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.321 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.321 13:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:10:13.258 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:13.259 13:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.259 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:13.517 00:10:13.517 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:13.517 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:13.517 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.775 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.775 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.775 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.775 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.034 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.034 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:14.034 { 00:10:14.034 "cntlid": 27, 00:10:14.034 "qid": 0, 00:10:14.034 "state": "enabled", 00:10:14.034 "thread": "nvmf_tgt_poll_group_000", 00:10:14.034 "listen_address": { 00:10:14.034 "trtype": "TCP", 00:10:14.034 "adrfam": "IPv4", 00:10:14.034 "traddr": "10.0.0.2", 00:10:14.034 "trsvcid": "4420" 00:10:14.034 }, 00:10:14.034 "peer_address": { 00:10:14.034 "trtype": "TCP", 00:10:14.034 "adrfam": "IPv4", 00:10:14.034 "traddr": "10.0.0.1", 00:10:14.034 "trsvcid": "40912" 00:10:14.034 }, 00:10:14.034 "auth": { 00:10:14.034 "state": "completed", 00:10:14.034 "digest": "sha256", 00:10:14.034 "dhgroup": "ffdhe4096" 00:10:14.034 } 00:10:14.034 } 00:10:14.034 ]' 00:10:14.034 13:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:14.034 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.034 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:14.034 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:14.034 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:14.034 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.034 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.034 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.292 13:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.228 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:15.794 00:10:15.794 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:15.794 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.794 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:16.053 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.053 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.053 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.053 13:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.053 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:16.054 { 00:10:16.054 "cntlid": 29, 00:10:16.054 "qid": 0, 00:10:16.054 "state": "enabled", 00:10:16.054 "thread": "nvmf_tgt_poll_group_000", 00:10:16.054 "listen_address": { 00:10:16.054 "trtype": "TCP", 00:10:16.054 "adrfam": "IPv4", 00:10:16.054 "traddr": "10.0.0.2", 00:10:16.054 "trsvcid": "4420" 00:10:16.054 }, 00:10:16.054 "peer_address": { 00:10:16.054 "trtype": "TCP", 00:10:16.054 "adrfam": "IPv4", 00:10:16.054 "traddr": "10.0.0.1", 00:10:16.054 "trsvcid": "40942" 00:10:16.054 }, 00:10:16.054 "auth": { 00:10:16.054 "state": "completed", 00:10:16.054 "digest": "sha256", 00:10:16.054 "dhgroup": "ffdhe4096" 00:10:16.054 } 00:10:16.054 } 00:10:16.054 ]' 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:16.054 13:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.054 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.313 13:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.248 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.248 13:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.249 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:17.249 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:17.816 00:10:17.816 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:17.816 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.816 13:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.074 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.074 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.075 { 00:10:18.075 "cntlid": 31, 00:10:18.075 "qid": 0, 00:10:18.075 "state": "enabled", 00:10:18.075 "thread": "nvmf_tgt_poll_group_000", 00:10:18.075 "listen_address": { 00:10:18.075 "trtype": "TCP", 00:10:18.075 "adrfam": "IPv4", 00:10:18.075 "traddr": "10.0.0.2", 00:10:18.075 "trsvcid": "4420" 00:10:18.075 }, 00:10:18.075 "peer_address": { 00:10:18.075 "trtype": "TCP", 00:10:18.075 "adrfam": "IPv4", 00:10:18.075 "traddr": "10.0.0.1", 00:10:18.075 "trsvcid": "40972" 00:10:18.075 }, 00:10:18.075 "auth": { 00:10:18.075 "state": "completed", 00:10:18.075 "digest": "sha256", 00:10:18.075 "dhgroup": "ffdhe4096" 00:10:18.075 } 00:10:18.075 } 00:10:18.075 ]' 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.075 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.333 13:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:18.903 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.903 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:18.903 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.903 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.174 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.174 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:19.174 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:19.174 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.174 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:19.433 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:19.691 00:10:19.691 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:19.691 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:19.691 13:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:20.258 { 00:10:20.258 "cntlid": 33, 00:10:20.258 "qid": 0, 00:10:20.258 "state": "enabled", 00:10:20.258 "thread": "nvmf_tgt_poll_group_000", 00:10:20.258 "listen_address": { 00:10:20.258 "trtype": "TCP", 00:10:20.258 "adrfam": "IPv4", 00:10:20.258 "traddr": "10.0.0.2", 00:10:20.258 "trsvcid": "4420" 00:10:20.258 }, 00:10:20.258 "peer_address": { 00:10:20.258 "trtype": "TCP", 00:10:20.258 "adrfam": "IPv4", 00:10:20.258 "traddr": "10.0.0.1", 00:10:20.258 "trsvcid": "41010" 00:10:20.258 }, 00:10:20.258 "auth": { 00:10:20.258 "state": "completed", 00:10:20.258 "digest": "sha256", 00:10:20.258 "dhgroup": "ffdhe6144" 00:10:20.258 } 00:10:20.258 } 00:10:20.258 ]' 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:20.258 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.259 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.259 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.259 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.517 13:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 
0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:21.452 13:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.020 00:10:22.020 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:22.020 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.020 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:22.279 { 00:10:22.279 "cntlid": 35, 00:10:22.279 "qid": 0, 00:10:22.279 "state": "enabled", 00:10:22.279 "thread": "nvmf_tgt_poll_group_000", 00:10:22.279 "listen_address": { 00:10:22.279 "trtype": "TCP", 00:10:22.279 "adrfam": "IPv4", 00:10:22.279 "traddr": "10.0.0.2", 00:10:22.279 "trsvcid": "4420" 00:10:22.279 }, 00:10:22.279 "peer_address": { 00:10:22.279 "trtype": "TCP", 00:10:22.279 "adrfam": "IPv4", 00:10:22.279 "traddr": "10.0.0.1", 00:10:22.279 "trsvcid": "33544" 00:10:22.279 }, 00:10:22.279 "auth": { 00:10:22.279 "state": "completed", 00:10:22.279 "digest": "sha256", 00:10:22.279 "dhgroup": "ffdhe6144" 00:10:22.279 } 00:10:22.279 } 00:10:22.279 ]' 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:22.279 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:22.537 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.537 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.537 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.796 13:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.363 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.363 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:23.621 13:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.215 00:10:24.215 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:24.215 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:24.215 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.489 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.489 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.489 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.489 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:24.490 { 00:10:24.490 "cntlid": 37, 00:10:24.490 "qid": 0, 00:10:24.490 "state": "enabled", 00:10:24.490 "thread": "nvmf_tgt_poll_group_000", 00:10:24.490 "listen_address": { 00:10:24.490 "trtype": "TCP", 00:10:24.490 "adrfam": "IPv4", 00:10:24.490 "traddr": "10.0.0.2", 00:10:24.490 "trsvcid": "4420" 00:10:24.490 }, 00:10:24.490 "peer_address": { 00:10:24.490 "trtype": "TCP", 00:10:24.490 "adrfam": "IPv4", 00:10:24.490 "traddr": "10.0.0.1", 00:10:24.490 "trsvcid": "33554" 00:10:24.490 }, 00:10:24.490 "auth": { 00:10:24.490 "state": "completed", 00:10:24.490 "digest": "sha256", 00:10:24.490 "dhgroup": "ffdhe6144" 00:10:24.490 } 00:10:24.490 } 00:10:24.490 ]' 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.490 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.748 13:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:25.684 13:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:26.249 00:10:26.249 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:26.249 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.249 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:26.508 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.508 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.508 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.508 13:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.508 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.508 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:26.508 { 00:10:26.508 "cntlid": 39, 00:10:26.508 "qid": 0, 00:10:26.508 "state": "enabled", 00:10:26.508 "thread": "nvmf_tgt_poll_group_000", 00:10:26.508 "listen_address": { 00:10:26.508 "trtype": "TCP", 00:10:26.508 "adrfam": "IPv4", 00:10:26.508 "traddr": "10.0.0.2", 00:10:26.508 "trsvcid": "4420" 00:10:26.508 }, 00:10:26.508 "peer_address": { 00:10:26.508 "trtype": "TCP", 00:10:26.509 "adrfam": "IPv4", 00:10:26.509 "traddr": "10.0.0.1", 00:10:26.509 "trsvcid": "33580" 00:10:26.509 }, 00:10:26.509 "auth": { 00:10:26.509 "state": "completed", 00:10:26.509 "digest": "sha256", 00:10:26.509 "dhgroup": "ffdhe6144" 00:10:26.509 } 00:10:26.509 } 00:10:26.509 ]' 00:10:26.509 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:26.509 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.509 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:26.768 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:26.768 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:26.768 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.768 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.768 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.026 13:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:27.594 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.853 13:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.421 00:10:28.421 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:28.421 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.421 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.680 { 00:10:28.680 "cntlid": 41, 00:10:28.680 "qid": 0, 
00:10:28.680 "state": "enabled", 00:10:28.680 "thread": "nvmf_tgt_poll_group_000", 00:10:28.680 "listen_address": { 00:10:28.680 "trtype": "TCP", 00:10:28.680 "adrfam": "IPv4", 00:10:28.680 "traddr": "10.0.0.2", 00:10:28.680 "trsvcid": "4420" 00:10:28.680 }, 00:10:28.680 "peer_address": { 00:10:28.680 "trtype": "TCP", 00:10:28.680 "adrfam": "IPv4", 00:10:28.680 "traddr": "10.0.0.1", 00:10:28.680 "trsvcid": "33616" 00:10:28.680 }, 00:10:28.680 "auth": { 00:10:28.680 "state": "completed", 00:10:28.680 "digest": "sha256", 00:10:28.680 "dhgroup": "ffdhe8192" 00:10:28.680 } 00:10:28.680 } 00:10:28.680 ]' 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.680 13:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.940 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:29.507 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.075 13:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.642 00:10:30.642 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.642 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.642 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.901 { 00:10:30.901 "cntlid": 43, 00:10:30.901 "qid": 0, 00:10:30.901 "state": "enabled", 00:10:30.901 "thread": "nvmf_tgt_poll_group_000", 00:10:30.901 "listen_address": { 00:10:30.901 "trtype": "TCP", 00:10:30.901 "adrfam": "IPv4", 00:10:30.901 "traddr": "10.0.0.2", 00:10:30.901 "trsvcid": "4420" 00:10:30.901 }, 00:10:30.901 "peer_address": { 00:10:30.901 "trtype": "TCP", 00:10:30.901 "adrfam": "IPv4", 00:10:30.901 "traddr": "10.0.0.1", 
00:10:30.901 "trsvcid": "48622" 00:10:30.901 }, 00:10:30.901 "auth": { 00:10:30.901 "state": "completed", 00:10:30.901 "digest": "sha256", 00:10:30.901 "dhgroup": "ffdhe8192" 00:10:30.901 } 00:10:30.901 } 00:10:30.901 ]' 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.901 13:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:30.901 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:30.901 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.901 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.901 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.901 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.158 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:31.724 13:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:31.983 13:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.983 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.918 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.918 { 00:10:32.918 "cntlid": 45, 00:10:32.918 "qid": 0, 00:10:32.918 "state": "enabled", 00:10:32.918 "thread": "nvmf_tgt_poll_group_000", 00:10:32.918 "listen_address": { 00:10:32.918 "trtype": "TCP", 00:10:32.918 "adrfam": "IPv4", 00:10:32.918 "traddr": "10.0.0.2", 00:10:32.918 "trsvcid": "4420" 00:10:32.918 }, 00:10:32.918 "peer_address": { 00:10:32.918 "trtype": "TCP", 00:10:32.918 "adrfam": "IPv4", 00:10:32.918 "traddr": "10.0.0.1", 00:10:32.918 "trsvcid": "48646" 00:10:32.918 }, 00:10:32.918 "auth": { 00:10:32.918 "state": "completed", 00:10:32.918 "digest": "sha256", 00:10:32.918 "dhgroup": "ffdhe8192" 00:10:32.918 } 00:10:32.918 } 00:10:32.918 ]' 00:10:32.918 13:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.918 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.918 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.918 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:32.918 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:33.176 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.176 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.176 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.452 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.063 13:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 
--dhchap-key key3 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.063 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.322 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.322 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:34.323 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:34.889 00:10:34.889 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.889 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.889 13:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.147 { 00:10:35.147 "cntlid": 47, 00:10:35.147 "qid": 0, 00:10:35.147 "state": "enabled", 00:10:35.147 "thread": "nvmf_tgt_poll_group_000", 00:10:35.147 "listen_address": { 00:10:35.147 "trtype": "TCP", 00:10:35.147 "adrfam": "IPv4", 00:10:35.147 "traddr": "10.0.0.2", 00:10:35.147 "trsvcid": "4420" 00:10:35.147 }, 00:10:35.147 "peer_address": { 00:10:35.147 "trtype": "TCP", 00:10:35.147 "adrfam": "IPv4", 00:10:35.147 "traddr": "10.0.0.1", 00:10:35.147 "trsvcid": "48672" 00:10:35.147 }, 00:10:35.147 "auth": { 00:10:35.147 "state": "completed", 00:10:35.147 "digest": "sha256", 00:10:35.147 "dhgroup": "ffdhe8192" 00:10:35.147 } 00:10:35.147 } 00:10:35.147 ]' 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.147 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.405 13:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.340 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.906 00:10:36.906 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.906 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.906 13:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.164 { 00:10:37.164 "cntlid": 49, 00:10:37.164 "qid": 0, 00:10:37.164 "state": "enabled", 00:10:37.164 "thread": "nvmf_tgt_poll_group_000", 00:10:37.164 "listen_address": { 00:10:37.164 "trtype": "TCP", 00:10:37.164 "adrfam": "IPv4", 00:10:37.164 "traddr": "10.0.0.2", 00:10:37.164 "trsvcid": "4420" 00:10:37.164 }, 00:10:37.164 "peer_address": { 00:10:37.164 "trtype": "TCP", 00:10:37.164 "adrfam": "IPv4", 00:10:37.164 "traddr": "10.0.0.1", 00:10:37.164 "trsvcid": "48702" 00:10:37.164 }, 00:10:37.164 "auth": { 00:10:37.164 "state": "completed", 00:10:37.164 "digest": "sha384", 00:10:37.164 "dhgroup": "null" 00:10:37.164 } 00:10:37.164 } 00:10:37.164 ]' 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.164 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.423 13:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:38.015 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.273 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.531 00:10:38.531 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.531 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.531 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.789 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.789 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.789 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.789 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.789 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.789 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.789 { 00:10:38.789 "cntlid": 51, 00:10:38.789 "qid": 0, 00:10:38.789 "state": "enabled", 00:10:38.789 "thread": "nvmf_tgt_poll_group_000", 00:10:38.789 "listen_address": { 00:10:38.789 "trtype": "TCP", 00:10:38.789 "adrfam": "IPv4", 00:10:38.789 "traddr": "10.0.0.2", 00:10:38.789 "trsvcid": "4420" 00:10:38.789 }, 00:10:38.789 "peer_address": { 00:10:38.789 "trtype": "TCP", 00:10:38.789 "adrfam": "IPv4", 00:10:38.789 "traddr": "10.0.0.1", 00:10:38.789 "trsvcid": "48734" 00:10:38.789 }, 00:10:38.789 "auth": { 00:10:38.789 "state": "completed", 00:10:38.789 "digest": "sha384", 00:10:38.789 "dhgroup": "null" 00:10:38.789 } 00:10:38.789 } 00:10:38.789 ]' 00:10:38.789 13:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.047 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.047 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.047 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:39.047 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.047 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.047 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.047 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.305 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret 
DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:39.870 13:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.129 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.387 00:10:40.387 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.387 13:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.387 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.645 { 00:10:40.645 "cntlid": 53, 00:10:40.645 "qid": 0, 00:10:40.645 "state": "enabled", 00:10:40.645 "thread": "nvmf_tgt_poll_group_000", 00:10:40.645 "listen_address": { 00:10:40.645 "trtype": "TCP", 00:10:40.645 "adrfam": "IPv4", 00:10:40.645 "traddr": "10.0.0.2", 00:10:40.645 "trsvcid": "4420" 00:10:40.645 }, 00:10:40.645 "peer_address": { 00:10:40.645 "trtype": "TCP", 00:10:40.645 "adrfam": "IPv4", 00:10:40.645 "traddr": "10.0.0.1", 00:10:40.645 "trsvcid": "54424" 00:10:40.645 }, 00:10:40.645 "auth": { 00:10:40.645 "state": "completed", 00:10:40.645 "digest": "sha384", 00:10:40.645 "dhgroup": "null" 00:10:40.645 } 00:10:40.645 } 00:10:40.645 ]' 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:40.645 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.903 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.903 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.903 13:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.160 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:41.724 13:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.290 00:10:42.290 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.290 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.290 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.548 { 00:10:42.548 "cntlid": 55, 00:10:42.548 "qid": 0, 00:10:42.548 "state": "enabled", 00:10:42.548 "thread": "nvmf_tgt_poll_group_000", 00:10:42.548 "listen_address": { 00:10:42.548 "trtype": "TCP", 00:10:42.548 "adrfam": "IPv4", 00:10:42.548 "traddr": "10.0.0.2", 00:10:42.548 "trsvcid": "4420" 00:10:42.548 }, 00:10:42.548 "peer_address": { 00:10:42.548 "trtype": "TCP", 00:10:42.548 "adrfam": "IPv4", 00:10:42.548 "traddr": "10.0.0.1", 00:10:42.548 "trsvcid": "54438" 00:10:42.548 }, 00:10:42.548 "auth": { 00:10:42.548 "state": "completed", 00:10:42.548 "digest": "sha384", 00:10:42.548 "dhgroup": "null" 00:10:42.548 } 00:10:42.548 } 00:10:42.548 ]' 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.548 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.806 13:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:43.374 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.374 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:43.374 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.374 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.631 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.631 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.631 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.631 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:43.631 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.890 13:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.149 00:10:44.149 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.149 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.149 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.407 13:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.407 { 00:10:44.407 "cntlid": 57, 00:10:44.407 "qid": 0, 00:10:44.407 "state": "enabled", 00:10:44.407 "thread": "nvmf_tgt_poll_group_000", 00:10:44.407 "listen_address": { 00:10:44.407 "trtype": "TCP", 00:10:44.407 "adrfam": "IPv4", 00:10:44.407 "traddr": "10.0.0.2", 00:10:44.407 "trsvcid": "4420" 00:10:44.407 }, 00:10:44.407 "peer_address": { 00:10:44.407 "trtype": "TCP", 00:10:44.407 "adrfam": "IPv4", 00:10:44.407 "traddr": "10.0.0.1", 00:10:44.407 "trsvcid": "54452" 00:10:44.407 }, 00:10:44.407 "auth": { 00:10:44.407 "state": "completed", 00:10:44.407 "digest": "sha384", 00:10:44.407 "dhgroup": "ffdhe2048" 00:10:44.407 } 00:10:44.407 } 00:10:44.407 ]' 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.407 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.665 13:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:45.600 13:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.167 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.167 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.167 { 00:10:46.167 "cntlid": 59, 00:10:46.167 "qid": 0, 00:10:46.167 "state": "enabled", 00:10:46.167 "thread": "nvmf_tgt_poll_group_000", 00:10:46.167 "listen_address": { 00:10:46.167 "trtype": "TCP", 00:10:46.167 "adrfam": "IPv4", 00:10:46.167 "traddr": "10.0.0.2", 00:10:46.167 "trsvcid": "4420" 
00:10:46.167 }, 00:10:46.167 "peer_address": { 00:10:46.167 "trtype": "TCP", 00:10:46.167 "adrfam": "IPv4", 00:10:46.167 "traddr": "10.0.0.1", 00:10:46.168 "trsvcid": "54466" 00:10:46.168 }, 00:10:46.168 "auth": { 00:10:46.168 "state": "completed", 00:10:46.168 "digest": "sha384", 00:10:46.168 "dhgroup": "ffdhe2048" 00:10:46.168 } 00:10:46.168 } 00:10:46.168 ]' 00:10:46.168 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.425 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.425 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.425 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:46.425 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.425 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.425 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.425 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.683 13:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:47.249 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.507 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.765 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.765 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:47.765 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.023 00:10:48.023 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.023 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.023 13:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.280 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.280 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.280 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.280 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.280 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.280 { 00:10:48.280 "cntlid": 61, 00:10:48.280 "qid": 0, 00:10:48.280 "state": "enabled", 00:10:48.280 "thread": "nvmf_tgt_poll_group_000", 00:10:48.280 "listen_address": { 00:10:48.280 "trtype": "TCP", 00:10:48.280 "adrfam": "IPv4", 00:10:48.280 "traddr": "10.0.0.2", 00:10:48.280 "trsvcid": "4420" 00:10:48.280 }, 00:10:48.280 "peer_address": { 00:10:48.280 "trtype": "TCP", 00:10:48.280 "adrfam": "IPv4", 00:10:48.280 "traddr": "10.0.0.1", 00:10:48.280 "trsvcid": "54498" 00:10:48.280 }, 00:10:48.280 "auth": { 00:10:48.280 "state": "completed", 00:10:48.280 "digest": "sha384", 00:10:48.280 "dhgroup": "ffdhe2048" 00:10:48.280 } 00:10:48.280 } 00:10:48.280 ]' 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.281 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.538 13:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:49.473 13:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:50.039 00:10:50.039 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.039 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.039 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.297 { 00:10:50.297 "cntlid": 63, 00:10:50.297 "qid": 0, 00:10:50.297 "state": "enabled", 00:10:50.297 "thread": "nvmf_tgt_poll_group_000", 00:10:50.297 "listen_address": { 00:10:50.297 "trtype": "TCP", 00:10:50.297 "adrfam": "IPv4", 00:10:50.297 "traddr": "10.0.0.2", 00:10:50.297 "trsvcid": "4420" 00:10:50.297 }, 00:10:50.297 "peer_address": { 00:10:50.297 "trtype": "TCP", 00:10:50.297 "adrfam": "IPv4", 00:10:50.297 "traddr": "10.0.0.1", 00:10:50.297 "trsvcid": "39686" 00:10:50.297 }, 00:10:50.297 "auth": { 00:10:50.297 "state": "completed", 00:10:50.297 "digest": "sha384", 00:10:50.297 "dhgroup": "ffdhe2048" 00:10:50.297 } 00:10:50.297 } 00:10:50.297 ]' 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.297 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.555 13:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.489 13:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:51.489 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.054 00:10:52.054 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.054 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.054 13:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.054 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.054 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.054 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.054 { 00:10:52.054 "cntlid": 65, 00:10:52.054 "qid": 0, 00:10:52.054 "state": "enabled", 00:10:52.054 "thread": "nvmf_tgt_poll_group_000", 00:10:52.054 "listen_address": { 00:10:52.054 "trtype": "TCP", 00:10:52.054 "adrfam": "IPv4", 00:10:52.054 "traddr": "10.0.0.2", 00:10:52.054 "trsvcid": "4420" 00:10:52.054 }, 00:10:52.054 "peer_address": { 00:10:52.054 "trtype": "TCP", 00:10:52.054 "adrfam": "IPv4", 00:10:52.054 "traddr": "10.0.0.1", 00:10:52.054 "trsvcid": "39714" 00:10:52.054 }, 00:10:52.054 "auth": { 00:10:52.054 "state": "completed", 00:10:52.054 "digest": "sha384", 00:10:52.054 "dhgroup": "ffdhe3072" 00:10:52.054 } 00:10:52.054 } 00:10:52.054 ]' 00:10:52.054 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.311 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.311 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.311 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:52.311 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.311 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.311 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.311 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.569 13:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:53.136 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:10:53.394 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.652 00:10:53.652 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.652 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.652 13:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.909 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.909 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.909 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.909 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.909 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.909 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.909 { 00:10:53.909 "cntlid": 67, 00:10:53.909 "qid": 0, 00:10:53.909 "state": "enabled", 00:10:53.909 "thread": "nvmf_tgt_poll_group_000", 00:10:53.909 "listen_address": { 00:10:53.909 "trtype": "TCP", 00:10:53.909 "adrfam": "IPv4", 00:10:53.909 "traddr": "10.0.0.2", 00:10:53.909 "trsvcid": "4420" 00:10:53.909 }, 00:10:53.909 "peer_address": { 00:10:53.909 "trtype": "TCP", 00:10:53.909 "adrfam": "IPv4", 00:10:53.909 "traddr": "10.0.0.1", 00:10:53.909 "trsvcid": "39740" 00:10:53.910 }, 00:10:53.910 "auth": { 00:10:53.910 "state": "completed", 00:10:53.910 "digest": "sha384", 00:10:53.910 "dhgroup": "ffdhe3072" 00:10:53.910 } 00:10:53.910 } 00:10:53.910 ]' 00:10:53.910 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.167 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.167 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.167 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:54.167 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.167 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.167 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.167 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.425 13:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 
0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:54.990 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.248 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:10:55.506 00:10:55.506 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.506 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.506 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.765 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.765 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.765 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.765 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.765 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.765 { 00:10:55.765 "cntlid": 69, 00:10:55.765 "qid": 0, 00:10:55.765 "state": "enabled", 00:10:55.765 "thread": "nvmf_tgt_poll_group_000", 00:10:55.765 "listen_address": { 00:10:55.765 "trtype": "TCP", 00:10:55.765 "adrfam": "IPv4", 00:10:55.765 "traddr": "10.0.0.2", 00:10:55.765 "trsvcid": "4420" 00:10:55.765 }, 00:10:55.765 "peer_address": { 00:10:55.765 "trtype": "TCP", 00:10:55.765 "adrfam": "IPv4", 00:10:55.765 "traddr": "10.0.0.1", 00:10:55.765 "trsvcid": "39772" 00:10:55.765 }, 00:10:55.765 "auth": { 00:10:55.765 "state": "completed", 00:10:55.765 "digest": "sha384", 00:10:55.765 "dhgroup": "ffdhe3072" 00:10:55.765 } 00:10:55.765 } 00:10:55.765 ]' 00:10:55.765 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.023 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.023 13:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.023 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:56.023 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.023 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.023 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.023 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.281 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
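Alongside the SPDK bdev_nvme path, each pass also drives the kernel initiator through nvme-cli, as in the connect/disconnect pair just above. The same steps, laid out as a sketch: the DHHC-1 strings are the throwaway test secrets printed in this log (key2/ckey2), not real credentials, and an nvme-cli build with DH-HMAC-CHAP support is assumed.

# Host NQN and host ID both reuse the generated UUID, as in the trace.
hostuuid=0dc0e430-4c84-41eb-b0bf-db443f090fb1
subnqn=nqn.2024-03.io.spdk:cnode0

# Connect with the host secret (key2) and the controller secret (ckey2) for
# bidirectional DH-HMAC-CHAP, then tear the session down again.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$hostuuid" --hostid "$hostuuid" \
    --dhchap-secret 'DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==:' \
    --dhchap-ctrl-secret 'DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj:'
nvme disconnect -n "$subnqn"

# The host entry is then removed so the next key id starts from scratch.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:$hostuuid"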
00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:56.850 13:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:57.108 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:57.675 00:10:57.675 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.675 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.675 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.934 { 00:10:57.934 "cntlid": 71, 00:10:57.934 "qid": 0, 00:10:57.934 "state": "enabled", 00:10:57.934 "thread": "nvmf_tgt_poll_group_000", 00:10:57.934 "listen_address": { 00:10:57.934 "trtype": "TCP", 00:10:57.934 "adrfam": "IPv4", 00:10:57.934 "traddr": "10.0.0.2", 00:10:57.934 "trsvcid": "4420" 00:10:57.934 }, 00:10:57.934 "peer_address": { 00:10:57.934 "trtype": "TCP", 00:10:57.934 "adrfam": "IPv4", 00:10:57.934 "traddr": "10.0.0.1", 00:10:57.934 "trsvcid": "39820" 00:10:57.934 }, 00:10:57.934 "auth": { 00:10:57.934 "state": "completed", 00:10:57.934 "digest": "sha384", 00:10:57.934 "dhgroup": "ffdhe3072" 00:10:57.934 } 00:10:57.934 } 00:10:57.934 ]' 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.934 13:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.934 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.934 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.934 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.934 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.934 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.192 13:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.127 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.693 00:10:59.693 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.693 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.693 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.952 13:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.952 { 00:10:59.952 "cntlid": 73, 00:10:59.952 "qid": 0, 00:10:59.952 "state": "enabled", 00:10:59.952 "thread": "nvmf_tgt_poll_group_000", 00:10:59.952 "listen_address": { 00:10:59.952 "trtype": "TCP", 00:10:59.952 "adrfam": "IPv4", 00:10:59.952 "traddr": "10.0.0.2", 00:10:59.952 "trsvcid": "4420" 00:10:59.952 }, 00:10:59.952 "peer_address": { 00:10:59.952 "trtype": "TCP", 00:10:59.952 "adrfam": "IPv4", 00:10:59.952 "traddr": "10.0.0.1", 00:10:59.952 "trsvcid": "39852" 00:10:59.952 }, 00:10:59.952 "auth": { 00:10:59.952 "state": "completed", 00:10:59.952 "digest": "sha384", 00:10:59.952 "dhgroup": "ffdhe4096" 00:10:59.952 } 00:10:59.952 } 00:10:59.952 ]' 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.952 13:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:59.952 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:59.952 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.952 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.952 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.952 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.211 13:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.146 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.403 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.403 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.403 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.661 00:11:01.661 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.661 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.661 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.918 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.918 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.918 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.918 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.918 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.918 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.918 { 00:11:01.918 "cntlid": 75, 00:11:01.918 "qid": 0, 00:11:01.918 
"state": "enabled", 00:11:01.918 "thread": "nvmf_tgt_poll_group_000", 00:11:01.918 "listen_address": { 00:11:01.918 "trtype": "TCP", 00:11:01.918 "adrfam": "IPv4", 00:11:01.918 "traddr": "10.0.0.2", 00:11:01.918 "trsvcid": "4420" 00:11:01.918 }, 00:11:01.918 "peer_address": { 00:11:01.918 "trtype": "TCP", 00:11:01.918 "adrfam": "IPv4", 00:11:01.918 "traddr": "10.0.0.1", 00:11:01.918 "trsvcid": "60098" 00:11:01.918 }, 00:11:01.918 "auth": { 00:11:01.918 "state": "completed", 00:11:01.918 "digest": "sha384", 00:11:01.918 "dhgroup": "ffdhe4096" 00:11:01.918 } 00:11:01.918 } 00:11:01.918 ]' 00:11:01.918 13:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.918 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:01.918 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.919 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:01.919 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.177 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.177 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.177 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.435 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:03.000 13:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.258 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.516 00:11:03.516 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.516 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.516 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.774 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.774 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.774 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.774 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.774 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.774 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.774 { 00:11:03.774 "cntlid": 77, 00:11:03.774 "qid": 0, 00:11:03.774 "state": "enabled", 00:11:03.774 "thread": "nvmf_tgt_poll_group_000", 00:11:03.774 "listen_address": { 00:11:03.774 "trtype": "TCP", 00:11:03.774 "adrfam": "IPv4", 00:11:03.774 "traddr": "10.0.0.2", 00:11:03.774 "trsvcid": "4420" 00:11:03.774 }, 00:11:03.774 "peer_address": { 00:11:03.774 "trtype": "TCP", 00:11:03.774 "adrfam": "IPv4", 00:11:03.774 "traddr": "10.0.0.1", 00:11:03.774 "trsvcid": "60128" 00:11:03.774 }, 00:11:03.774 
"auth": { 00:11:03.774 "state": "completed", 00:11:03.774 "digest": "sha384", 00:11:03.774 "dhgroup": "ffdhe4096" 00:11:03.774 } 00:11:03.774 } 00:11:03.774 ]' 00:11:03.774 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.032 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.032 13:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.032 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:04.032 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.032 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.032 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.032 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.290 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:04.854 13:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.116 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.687 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.687 { 00:11:05.687 "cntlid": 79, 00:11:05.687 "qid": 0, 00:11:05.687 "state": "enabled", 00:11:05.687 "thread": "nvmf_tgt_poll_group_000", 00:11:05.687 "listen_address": { 00:11:05.687 "trtype": "TCP", 00:11:05.687 "adrfam": "IPv4", 00:11:05.687 "traddr": "10.0.0.2", 00:11:05.687 "trsvcid": "4420" 00:11:05.687 }, 00:11:05.687 "peer_address": { 00:11:05.687 "trtype": "TCP", 00:11:05.687 "adrfam": "IPv4", 00:11:05.687 "traddr": "10.0.0.1", 00:11:05.687 "trsvcid": "60158" 00:11:05.687 }, 00:11:05.687 "auth": { 00:11:05.687 "state": "completed", 00:11:05.687 "digest": "sha384", 00:11:05.687 "dhgroup": "ffdhe4096" 00:11:05.687 } 00:11:05.687 } 00:11:05.687 ]' 00:11:05.687 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.945 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.945 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
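The trace above is one pass of the connect_authenticate loop in target/auth.sh: each digest/DH-group/key combination is provisioned on the target, attached from the SPDK host, verified through the qpair's auth fields, and then re-checked with the kernel initiator. A condensed sketch of a single pass (the sha384/ffdhe4096/key2 case shown above) follows for reference; it reuses the RPC and nvme-cli invocations visible in the trace, assumes key2/ckey2 were registered as keyring entries earlier in the run, assumes the target-side rpc_cmd helper talks to the default RPC socket, and elides the literal DHHC-1 secret strings, which appear in full in the log.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side (hostrpc -> -s /var/tmp/host.sock): restrict negotiation to one digest/DH group
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

  # Target side (rpc_cmd in the trace): allow the host with key2 and controller key ckey2
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach from the SPDK host, authenticating with the same key pair
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify: the controller came up and the qpair negotiated the expected digest/dhgroup/state
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect sha384
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect ffdhe4096
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expect completed

  # Repeat the handshake with the kernel initiator, then clean up for the next iteration
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 \
      --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...   # literal secrets as logged above
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note that the key3 passes in the trace omit --dhchap-ctrlr-key: the ${ckeys[$3]:+...} expansion produces nothing when no controller key is defined for that index, so those iterations exercise unidirectional (host-only) authentication. The same cycle then repeats below for ffdhe6144 and ffdhe8192, and later for sha512 with the null DH group, i.e. DH-HMAC-CHAP without a Diffie-Hellman exchange.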
00:11:05.945 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:05.945 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.945 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.945 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.945 13:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.203 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:06.769 13:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.027 13:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.027 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.593 00:11:07.593 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.593 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.593 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.851 { 00:11:07.851 "cntlid": 81, 00:11:07.851 "qid": 0, 00:11:07.851 "state": "enabled", 00:11:07.851 "thread": "nvmf_tgt_poll_group_000", 00:11:07.851 "listen_address": { 00:11:07.851 "trtype": "TCP", 00:11:07.851 "adrfam": "IPv4", 00:11:07.851 "traddr": "10.0.0.2", 00:11:07.851 "trsvcid": "4420" 00:11:07.851 }, 00:11:07.851 "peer_address": { 00:11:07.851 "trtype": "TCP", 00:11:07.851 "adrfam": "IPv4", 00:11:07.851 "traddr": "10.0.0.1", 00:11:07.851 "trsvcid": "60184" 00:11:07.851 }, 00:11:07.851 "auth": { 00:11:07.851 "state": "completed", 00:11:07.851 "digest": "sha384", 00:11:07.851 "dhgroup": "ffdhe6144" 00:11:07.851 } 00:11:07.851 } 00:11:07.851 ]' 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:07.851 13:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.851 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:07.851 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.851 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.109 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:09.042 13:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.042 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.607 00:11:09.607 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.607 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.607 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.865 { 00:11:09.865 "cntlid": 83, 00:11:09.865 "qid": 0, 00:11:09.865 "state": "enabled", 00:11:09.865 "thread": "nvmf_tgt_poll_group_000", 00:11:09.865 "listen_address": { 00:11:09.865 "trtype": "TCP", 00:11:09.865 "adrfam": "IPv4", 00:11:09.865 "traddr": "10.0.0.2", 00:11:09.865 "trsvcid": "4420" 00:11:09.865 }, 00:11:09.865 "peer_address": { 00:11:09.865 "trtype": "TCP", 00:11:09.865 "adrfam": "IPv4", 00:11:09.865 "traddr": "10.0.0.1", 00:11:09.865 "trsvcid": "60220" 00:11:09.865 }, 00:11:09.865 "auth": { 00:11:09.865 "state": "completed", 00:11:09.865 "digest": "sha384", 00:11:09.865 "dhgroup": "ffdhe6144" 00:11:09.865 } 00:11:09.865 } 00:11:09.865 ]' 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:09.865 13:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.865 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.865 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.865 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.144 13:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:10.710 13:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.276 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.533 00:11:11.533 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.533 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.533 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.791 { 00:11:11.791 "cntlid": 85, 00:11:11.791 "qid": 0, 00:11:11.791 "state": "enabled", 00:11:11.791 "thread": "nvmf_tgt_poll_group_000", 00:11:11.791 "listen_address": { 00:11:11.791 "trtype": "TCP", 00:11:11.791 "adrfam": "IPv4", 00:11:11.791 "traddr": "10.0.0.2", 00:11:11.791 "trsvcid": "4420" 00:11:11.791 }, 00:11:11.791 "peer_address": { 00:11:11.791 "trtype": "TCP", 00:11:11.791 "adrfam": "IPv4", 00:11:11.791 "traddr": "10.0.0.1", 00:11:11.791 "trsvcid": "56234" 00:11:11.791 }, 00:11:11.791 "auth": { 00:11:11.791 "state": "completed", 00:11:11.791 "digest": "sha384", 00:11:11.791 "dhgroup": "ffdhe6144" 00:11:11.791 } 00:11:11.791 } 00:11:11.791 ]' 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.791 13:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.049 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:12.049 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.049 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.049 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.049 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.307 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret 
DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:12.873 13:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:13.131 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:13.697 00:11:13.697 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.697 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.697 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.955 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.955 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.955 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.955 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.955 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.955 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.955 { 00:11:13.955 "cntlid": 87, 00:11:13.955 "qid": 0, 00:11:13.955 "state": "enabled", 00:11:13.955 "thread": "nvmf_tgt_poll_group_000", 00:11:13.955 "listen_address": { 00:11:13.955 "trtype": "TCP", 00:11:13.955 "adrfam": "IPv4", 00:11:13.955 "traddr": "10.0.0.2", 00:11:13.955 "trsvcid": "4420" 00:11:13.955 }, 00:11:13.955 "peer_address": { 00:11:13.955 "trtype": "TCP", 00:11:13.955 "adrfam": "IPv4", 00:11:13.955 "traddr": "10.0.0.1", 00:11:13.955 "trsvcid": "56256" 00:11:13.955 }, 00:11:13.955 "auth": { 00:11:13.955 "state": "completed", 00:11:13.955 "digest": "sha384", 00:11:13.955 "dhgroup": "ffdhe6144" 00:11:13.955 } 00:11:13.955 } 00:11:13.955 ]' 00:11:13.955 13:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.955 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.955 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.955 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:13.955 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.955 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.955 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.955 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.213 13:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.148 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.712 00:11:15.970 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.970 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.970 13:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.970 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.970 13:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.970 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.970 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.970 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.970 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.970 { 00:11:15.970 "cntlid": 89, 00:11:15.970 "qid": 0, 00:11:15.970 "state": "enabled", 00:11:15.970 "thread": "nvmf_tgt_poll_group_000", 00:11:15.970 "listen_address": { 00:11:15.970 "trtype": "TCP", 00:11:15.970 "adrfam": "IPv4", 00:11:15.970 "traddr": "10.0.0.2", 00:11:15.970 "trsvcid": "4420" 00:11:15.970 }, 00:11:15.970 "peer_address": { 00:11:15.970 "trtype": "TCP", 00:11:15.970 "adrfam": "IPv4", 00:11:15.970 "traddr": "10.0.0.1", 00:11:15.970 "trsvcid": "56280" 00:11:15.970 }, 00:11:15.970 "auth": { 00:11:15.970 "state": "completed", 00:11:15.970 "digest": "sha384", 00:11:15.970 "dhgroup": "ffdhe8192" 00:11:15.970 } 00:11:15.970 } 00:11:15.970 ]' 00:11:15.970 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.228 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.228 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.228 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:16.229 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.229 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.229 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.229 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.486 13:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:17.052 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:17.333 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.334 13:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.268 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.268 13:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.268 { 00:11:18.268 "cntlid": 91, 00:11:18.268 "qid": 0, 00:11:18.268 "state": "enabled", 00:11:18.268 "thread": "nvmf_tgt_poll_group_000", 00:11:18.268 "listen_address": { 00:11:18.268 "trtype": "TCP", 00:11:18.268 "adrfam": "IPv4", 00:11:18.268 "traddr": "10.0.0.2", 00:11:18.268 "trsvcid": "4420" 00:11:18.268 }, 00:11:18.268 "peer_address": { 00:11:18.268 "trtype": "TCP", 00:11:18.268 "adrfam": "IPv4", 00:11:18.268 "traddr": "10.0.0.1", 00:11:18.268 "trsvcid": "56306" 00:11:18.268 }, 00:11:18.268 "auth": { 00:11:18.268 "state": "completed", 00:11:18.268 "digest": "sha384", 00:11:18.268 "dhgroup": "ffdhe8192" 00:11:18.268 } 00:11:18.268 } 00:11:18.268 ]' 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.268 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.526 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.526 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:18.527 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.527 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.527 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.527 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.784 13:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:19.350 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.608 13:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.175 00:11:20.175 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.175 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.175 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.433 { 00:11:20.433 "cntlid": 93, 00:11:20.433 "qid": 0, 00:11:20.433 "state": "enabled", 00:11:20.433 "thread": "nvmf_tgt_poll_group_000", 00:11:20.433 "listen_address": { 00:11:20.433 "trtype": "TCP", 00:11:20.433 "adrfam": "IPv4", 
00:11:20.433 "traddr": "10.0.0.2", 00:11:20.433 "trsvcid": "4420" 00:11:20.433 }, 00:11:20.433 "peer_address": { 00:11:20.433 "trtype": "TCP", 00:11:20.433 "adrfam": "IPv4", 00:11:20.433 "traddr": "10.0.0.1", 00:11:20.433 "trsvcid": "34156" 00:11:20.433 }, 00:11:20.433 "auth": { 00:11:20.433 "state": "completed", 00:11:20.433 "digest": "sha384", 00:11:20.433 "dhgroup": "ffdhe8192" 00:11:20.433 } 00:11:20.433 } 00:11:20.433 ]' 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.433 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.434 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.434 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:20.434 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.434 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.434 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.434 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.692 13:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.627 13:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.627 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:21.628 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.628 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.628 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.628 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:21.628 13:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:22.195 00:11:22.195 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.195 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.195 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.453 { 00:11:22.453 "cntlid": 95, 00:11:22.453 "qid": 0, 00:11:22.453 "state": "enabled", 00:11:22.453 "thread": "nvmf_tgt_poll_group_000", 00:11:22.453 "listen_address": { 00:11:22.453 "trtype": "TCP", 00:11:22.453 "adrfam": "IPv4", 00:11:22.453 "traddr": "10.0.0.2", 00:11:22.453 "trsvcid": "4420" 00:11:22.453 }, 00:11:22.453 "peer_address": { 00:11:22.453 "trtype": "TCP", 00:11:22.453 "adrfam": "IPv4", 00:11:22.453 "traddr": "10.0.0.1", 00:11:22.453 "trsvcid": "34176" 00:11:22.453 }, 00:11:22.453 "auth": { 00:11:22.453 "state": "completed", 00:11:22.453 "digest": "sha384", 00:11:22.453 "dhgroup": "ffdhe8192" 00:11:22.453 } 00:11:22.453 } 00:11:22.453 ]' 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.453 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:22.454 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.712 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.712 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.712 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.712 13:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:23.647 13:07:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.647 13:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.906 00:11:24.165 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.165 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.165 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.424 { 00:11:24.424 "cntlid": 97, 00:11:24.424 "qid": 0, 00:11:24.424 "state": "enabled", 00:11:24.424 "thread": "nvmf_tgt_poll_group_000", 00:11:24.424 "listen_address": { 00:11:24.424 "trtype": "TCP", 00:11:24.424 "adrfam": "IPv4", 00:11:24.424 "traddr": "10.0.0.2", 00:11:24.424 "trsvcid": "4420" 00:11:24.424 }, 00:11:24.424 "peer_address": { 00:11:24.424 "trtype": "TCP", 00:11:24.424 "adrfam": "IPv4", 00:11:24.424 "traddr": "10.0.0.1", 00:11:24.424 "trsvcid": "34194" 00:11:24.424 }, 00:11:24.424 "auth": { 00:11:24.424 "state": "completed", 00:11:24.424 "digest": "sha512", 00:11:24.424 "dhgroup": "null" 00:11:24.424 } 00:11:24.424 } 00:11:24.424 ]' 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.424 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.683 13:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:25.251 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.510 13:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.510 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.769 00:11:25.769 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.769 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.769 13:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.028 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.028 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.028 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.028 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.028 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.028 { 00:11:26.028 "cntlid": 99, 00:11:26.028 "qid": 0, 00:11:26.028 "state": "enabled", 00:11:26.028 "thread": "nvmf_tgt_poll_group_000", 00:11:26.028 "listen_address": { 00:11:26.028 "trtype": "TCP", 00:11:26.028 "adrfam": "IPv4", 00:11:26.028 "traddr": "10.0.0.2", 00:11:26.028 "trsvcid": "4420" 00:11:26.028 }, 00:11:26.028 "peer_address": { 00:11:26.028 "trtype": "TCP", 00:11:26.028 "adrfam": "IPv4", 00:11:26.028 "traddr": "10.0.0.1", 00:11:26.028 "trsvcid": "34220" 00:11:26.028 }, 00:11:26.028 "auth": { 00:11:26.028 "state": "completed", 00:11:26.028 "digest": "sha512", 00:11:26.028 "dhgroup": "null" 00:11:26.028 } 00:11:26.028 } 00:11:26.028 ]' 00:11:26.028 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.286 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:26.286 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.286 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:26.286 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.286 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
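Each iteration traced here drives the same three calls, with only the digest, DH group and key index changing: the host-side bdev_nvme options are restricted to the combination under test, the host NQN is registered on the target with that key, and a controller is attached using the matching host key. A condensed sketch of the iteration above (sha512 / null / key1), kept to the commands visible in the trace; hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock as shown at target/auth.sh@31, rpc_cmd is presumably the target-side rpc.py wrapper from autotest_common.sh (its expansion is hidden by xtrace_disable), and <hostnqn> stands in for the full nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 value used throughout:

    # host side: allow only the digest/dhgroup pair under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # target side: register the host NQN with the key for this iteration
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach a controller, authenticating with the matching key pair
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
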
00:11:26.286 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.286 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.545 13:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:27.112 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.370 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.629 00:11:27.888 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.888 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.888 13:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.147 { 00:11:28.147 "cntlid": 101, 00:11:28.147 "qid": 0, 00:11:28.147 "state": "enabled", 00:11:28.147 "thread": "nvmf_tgt_poll_group_000", 00:11:28.147 "listen_address": { 00:11:28.147 "trtype": "TCP", 00:11:28.147 "adrfam": "IPv4", 00:11:28.147 "traddr": "10.0.0.2", 00:11:28.147 "trsvcid": "4420" 00:11:28.147 }, 00:11:28.147 "peer_address": { 00:11:28.147 "trtype": "TCP", 00:11:28.147 "adrfam": "IPv4", 00:11:28.147 "traddr": "10.0.0.1", 00:11:28.147 "trsvcid": "34236" 00:11:28.147 }, 00:11:28.147 "auth": { 00:11:28.147 "state": "completed", 00:11:28.147 "digest": "sha512", 00:11:28.147 "dhgroup": "null" 00:11:28.147 } 00:11:28.147 } 00:11:28.147 ]' 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.147 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:28.148 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.148 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.148 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.148 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.407 13:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:28.974 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:29.233 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:29.861 
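The lines that follow repeat the verification pattern used after every attach: the host-side controller list must contain nvme0, and the target must report the negotiated auth parameters on the new qpair before the controller is detached again. Condensed, with the same wrappers as above and the values expected for this sha512 / null / key3 iteration noted in comments:

    hostrpc bdev_nvme_get_controllers            # jq -r '.[].name'   -> nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    # jq -r '.[0].auth.digest'  -> sha512
    # jq -r '.[0].auth.dhgroup' -> null
    # jq -r '.[0].auth.state'   -> completed
    hostrpc bdev_nvme_detach_controller nvme0
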
00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.861 { 00:11:29.861 "cntlid": 103, 00:11:29.861 "qid": 0, 00:11:29.861 "state": "enabled", 00:11:29.861 "thread": "nvmf_tgt_poll_group_000", 00:11:29.861 "listen_address": { 00:11:29.861 "trtype": "TCP", 00:11:29.861 "adrfam": "IPv4", 00:11:29.861 "traddr": "10.0.0.2", 00:11:29.861 "trsvcid": "4420" 00:11:29.861 }, 00:11:29.861 "peer_address": { 00:11:29.861 "trtype": "TCP", 00:11:29.861 "adrfam": "IPv4", 00:11:29.861 "traddr": "10.0.0.1", 00:11:29.861 "trsvcid": "34254" 00:11:29.861 }, 00:11:29.861 "auth": { 00:11:29.861 "state": "completed", 00:11:29.861 "digest": "sha512", 00:11:29.861 "dhgroup": "null" 00:11:29.861 } 00:11:29.861 } 00:11:29.861 ]' 00:11:29.861 13:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.861 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.861 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.120 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:30.120 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.120 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.120 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.120 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.378 13:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.944 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:30.945 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.203 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.204 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.204 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.770 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.770 { 00:11:31.770 "cntlid": 105, 00:11:31.770 "qid": 0, 00:11:31.770 "state": "enabled", 00:11:31.770 "thread": "nvmf_tgt_poll_group_000", 00:11:31.770 "listen_address": { 00:11:31.770 "trtype": "TCP", 00:11:31.770 "adrfam": "IPv4", 00:11:31.770 "traddr": "10.0.0.2", 00:11:31.770 "trsvcid": "4420" 00:11:31.770 }, 00:11:31.770 "peer_address": { 00:11:31.770 "trtype": "TCP", 00:11:31.770 "adrfam": "IPv4", 00:11:31.770 "traddr": "10.0.0.1", 00:11:31.770 "trsvcid": "58430" 00:11:31.770 }, 00:11:31.770 "auth": { 00:11:31.770 "state": "completed", 00:11:31.770 "digest": "sha512", 00:11:31.770 "dhgroup": "ffdhe2048" 00:11:31.770 } 00:11:31.770 } 00:11:31.770 ]' 00:11:31.770 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.029 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:32.029 13:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.029 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:32.029 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.029 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.029 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.029 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.288 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
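Right above, the same credentials are also exercised through the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed directly on the command line, disconnects, and the host registration is removed before the next key is tried. Roughly, with <hostnqn> again standing for the full uuid NQN and the DHHC-1 strings elided here (they appear in full in the trace):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q <hostnqn> --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 \
        --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>
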
00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:32.855 13:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.114 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.372 00:11:33.372 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.372 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.372 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.631 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.631 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.632 { 00:11:33.632 "cntlid": 107, 00:11:33.632 "qid": 0, 00:11:33.632 "state": "enabled", 00:11:33.632 "thread": "nvmf_tgt_poll_group_000", 00:11:33.632 "listen_address": { 00:11:33.632 "trtype": "TCP", 00:11:33.632 "adrfam": "IPv4", 00:11:33.632 "traddr": "10.0.0.2", 00:11:33.632 "trsvcid": "4420" 00:11:33.632 }, 00:11:33.632 "peer_address": { 00:11:33.632 "trtype": "TCP", 00:11:33.632 "adrfam": "IPv4", 00:11:33.632 "traddr": "10.0.0.1", 00:11:33.632 "trsvcid": "58444" 00:11:33.632 }, 00:11:33.632 "auth": { 00:11:33.632 "state": "completed", 00:11:33.632 "digest": "sha512", 00:11:33.632 "dhgroup": "ffdhe2048" 00:11:33.632 } 00:11:33.632 } 00:11:33.632 ]' 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:33.632 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.891 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.891 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.891 13:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.891 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:34.825 13:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.825 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.393 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.393 { 00:11:35.393 "cntlid": 109, 00:11:35.393 "qid": 
0, 00:11:35.393 "state": "enabled", 00:11:35.393 "thread": "nvmf_tgt_poll_group_000", 00:11:35.393 "listen_address": { 00:11:35.393 "trtype": "TCP", 00:11:35.393 "adrfam": "IPv4", 00:11:35.393 "traddr": "10.0.0.2", 00:11:35.393 "trsvcid": "4420" 00:11:35.393 }, 00:11:35.393 "peer_address": { 00:11:35.393 "trtype": "TCP", 00:11:35.393 "adrfam": "IPv4", 00:11:35.393 "traddr": "10.0.0.1", 00:11:35.393 "trsvcid": "58460" 00:11:35.393 }, 00:11:35.393 "auth": { 00:11:35.393 "state": "completed", 00:11:35.393 "digest": "sha512", 00:11:35.393 "dhgroup": "ffdhe2048" 00:11:35.393 } 00:11:35.393 } 00:11:35.393 ]' 00:11:35.393 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.652 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:35.652 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.652 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:35.652 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.652 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.652 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.652 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.911 13:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:36.479 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:36.737 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:11:36.737 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.737 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:36.737 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:36.737 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:36.737 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.738 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:36.738 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.738 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.738 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.738 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.738 13:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.996 00:11:36.996 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.996 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.996 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.255 { 00:11:37.255 "cntlid": 111, 00:11:37.255 "qid": 0, 00:11:37.255 "state": "enabled", 00:11:37.255 "thread": "nvmf_tgt_poll_group_000", 00:11:37.255 "listen_address": { 00:11:37.255 "trtype": "TCP", 00:11:37.255 "adrfam": "IPv4", 00:11:37.255 "traddr": "10.0.0.2", 00:11:37.255 "trsvcid": "4420" 00:11:37.255 }, 00:11:37.255 "peer_address": { 00:11:37.255 "trtype": "TCP", 00:11:37.255 "adrfam": "IPv4", 00:11:37.255 "traddr": "10.0.0.1", 00:11:37.255 "trsvcid": "58492" 00:11:37.255 }, 00:11:37.255 "auth": { 00:11:37.255 "state": "completed", 00:11:37.255 
"digest": "sha512", 00:11:37.255 "dhgroup": "ffdhe2048" 00:11:37.255 } 00:11:37.255 } 00:11:37.255 ]' 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:37.255 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.514 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.514 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.514 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.773 13:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:38.340 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.599 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.856 00:11:38.856 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.856 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.856 13:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.123 { 00:11:39.123 "cntlid": 113, 00:11:39.123 "qid": 0, 00:11:39.123 "state": "enabled", 00:11:39.123 "thread": "nvmf_tgt_poll_group_000", 00:11:39.123 "listen_address": { 00:11:39.123 "trtype": "TCP", 00:11:39.123 "adrfam": "IPv4", 00:11:39.123 "traddr": "10.0.0.2", 00:11:39.123 "trsvcid": "4420" 00:11:39.123 }, 00:11:39.123 "peer_address": { 00:11:39.123 "trtype": "TCP", 00:11:39.123 "adrfam": "IPv4", 00:11:39.123 "traddr": "10.0.0.1", 00:11:39.123 "trsvcid": "58526" 00:11:39.123 }, 00:11:39.123 "auth": { 00:11:39.123 "state": "completed", 00:11:39.123 "digest": "sha512", 00:11:39.123 "dhgroup": "ffdhe3072" 00:11:39.123 } 00:11:39.123 } 00:11:39.123 ]' 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.123 13:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.123 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.416 13:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.366 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.625 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.625 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.625 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.883 00:11:40.883 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.883 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.883 13:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.141 { 00:11:41.141 "cntlid": 115, 00:11:41.141 "qid": 0, 00:11:41.141 "state": "enabled", 00:11:41.141 "thread": "nvmf_tgt_poll_group_000", 00:11:41.141 "listen_address": { 00:11:41.141 "trtype": "TCP", 00:11:41.141 "adrfam": "IPv4", 00:11:41.141 "traddr": "10.0.0.2", 00:11:41.141 "trsvcid": "4420" 00:11:41.141 }, 00:11:41.141 "peer_address": { 00:11:41.141 "trtype": "TCP", 00:11:41.141 "adrfam": "IPv4", 00:11:41.141 "traddr": "10.0.0.1", 00:11:41.141 "trsvcid": "50582" 00:11:41.141 }, 00:11:41.141 "auth": { 00:11:41.141 "state": "completed", 00:11:41.141 "digest": "sha512", 00:11:41.141 "dhgroup": "ffdhe3072" 00:11:41.141 } 00:11:41.141 } 00:11:41.141 ]' 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.141 13:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.141 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.401 13:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:42.337 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.596 13:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.596 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.854 00:11:42.854 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.854 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.854 13:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.113 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.113 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.113 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.113 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.113 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.113 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.113 { 00:11:43.113 "cntlid": 117, 00:11:43.113 "qid": 0, 00:11:43.113 "state": "enabled", 00:11:43.113 "thread": "nvmf_tgt_poll_group_000", 00:11:43.113 "listen_address": { 00:11:43.113 "trtype": "TCP", 00:11:43.113 "adrfam": "IPv4", 00:11:43.113 "traddr": "10.0.0.2", 00:11:43.113 "trsvcid": "4420" 00:11:43.113 }, 00:11:43.113 "peer_address": { 00:11:43.113 "trtype": "TCP", 00:11:43.113 "adrfam": "IPv4", 00:11:43.113 "traddr": "10.0.0.1", 00:11:43.113 "trsvcid": "50590" 00:11:43.113 }, 00:11:43.113 "auth": { 00:11:43.113 "state": "completed", 00:11:43.113 "digest": "sha512", 00:11:43.113 "dhgroup": "ffdhe3072" 00:11:43.114 } 00:11:43.114 } 00:11:43.114 ]' 00:11:43.114 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.114 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.114 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.373 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:43.373 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.373 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.373 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.373 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
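For reference, each per-key round traced above reduces to the same short sequence; the following is a condensed sketch built only from the commands visible in this trace (paths, NQNs, host UUID and flags copied from the log, with rpc_cmd being the target-side wrapper the trace uses), not the verbatim target/auth.sh source:

# One DH-CHAP round for a given digest/dhgroup/key, condensed from the trace above.
HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"   # host bdev app, as in the log
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# host side: restrict the negotiable digest and DH group for this round
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# target side: register the host with its key (and controller key, when one is configured)
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach, which performs the AUTH negotiation over the new queue pair
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify the controller came up and inspect the qpair's auth block on the target
[[ $($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
# tear down before the next key/dhgroup combination
$HOSTRPC bdev_nvme_detach_controller nvme0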
00:11:43.632 13:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:44.200 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.460 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.719 00:11:44.719 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.719 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.719 13:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.978 { 00:11:44.978 "cntlid": 119, 00:11:44.978 "qid": 0, 00:11:44.978 "state": "enabled", 00:11:44.978 "thread": "nvmf_tgt_poll_group_000", 00:11:44.978 "listen_address": { 00:11:44.978 "trtype": "TCP", 00:11:44.978 "adrfam": "IPv4", 00:11:44.978 "traddr": "10.0.0.2", 00:11:44.978 "trsvcid": "4420" 00:11:44.978 }, 00:11:44.978 "peer_address": { 00:11:44.978 "trtype": "TCP", 00:11:44.978 "adrfam": "IPv4", 00:11:44.978 "traddr": "10.0.0.1", 00:11:44.978 "trsvcid": "50626" 00:11:44.978 }, 00:11:44.978 "auth": { 00:11:44.978 "state": "completed", 00:11:44.978 "digest": "sha512", 00:11:44.978 "dhgroup": "ffdhe3072" 00:11:44.978 } 00:11:44.978 } 00:11:44.978 ]' 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.978 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.238 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:45.238 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.238 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.238 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.238 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.497 13:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:46.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:46.064 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.325 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.583 00:11:46.583 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.584 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:11:46.584 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.842 { 00:11:46.842 "cntlid": 121, 00:11:46.842 "qid": 0, 00:11:46.842 "state": "enabled", 00:11:46.842 "thread": "nvmf_tgt_poll_group_000", 00:11:46.842 "listen_address": { 00:11:46.842 "trtype": "TCP", 00:11:46.842 "adrfam": "IPv4", 00:11:46.842 "traddr": "10.0.0.2", 00:11:46.842 "trsvcid": "4420" 00:11:46.842 }, 00:11:46.842 "peer_address": { 00:11:46.842 "trtype": "TCP", 00:11:46.842 "adrfam": "IPv4", 00:11:46.842 "traddr": "10.0.0.1", 00:11:46.842 "trsvcid": "50670" 00:11:46.842 }, 00:11:46.842 "auth": { 00:11:46.842 "state": "completed", 00:11:46.842 "digest": "sha512", 00:11:46.842 "dhgroup": "ffdhe4096" 00:11:46.842 } 00:11:46.842 } 00:11:46.842 ]' 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:46.842 13:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.101 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:47.101 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.101 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.101 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.101 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.360 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:47.928 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.928 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:47.928 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.928 13:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.928 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.928 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.928 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:47.928 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.186 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.445 00:11:48.445 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.445 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.445 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.704 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.704 13:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.704 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.704 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.704 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.704 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.704 { 00:11:48.704 "cntlid": 123, 00:11:48.704 "qid": 0, 00:11:48.704 "state": "enabled", 00:11:48.704 "thread": "nvmf_tgt_poll_group_000", 00:11:48.704 "listen_address": { 00:11:48.704 "trtype": "TCP", 00:11:48.704 "adrfam": "IPv4", 00:11:48.704 "traddr": "10.0.0.2", 00:11:48.704 "trsvcid": "4420" 00:11:48.704 }, 00:11:48.704 "peer_address": { 00:11:48.704 "trtype": "TCP", 00:11:48.704 "adrfam": "IPv4", 00:11:48.704 "traddr": "10.0.0.1", 00:11:48.704 "trsvcid": "50694" 00:11:48.704 }, 00:11:48.704 "auth": { 00:11:48.704 "state": "completed", 00:11:48.704 "digest": "sha512", 00:11:48.704 "dhgroup": "ffdhe4096" 00:11:48.704 } 00:11:48.704 } 00:11:48.704 ]' 00:11:48.704 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.963 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.963 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.963 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:48.963 13:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.963 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.963 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.963 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.247 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
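The qpair check in each round compares the reported auth parameters against what was negotiated; a minimal sketch of that verification, using the same jq filters and expected values as the ffdhe4096 round shown here:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# digest, dhgroup and state must match what bdev_nvme_set_options allowed for this round
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]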
00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:49.839 13:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.097 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.355 00:11:50.355 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.355 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.355 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.613 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.613 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.613 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.613 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.613 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.613 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.613 { 00:11:50.613 "cntlid": 125, 00:11:50.613 "qid": 0, 00:11:50.613 "state": "enabled", 00:11:50.613 "thread": "nvmf_tgt_poll_group_000", 00:11:50.613 "listen_address": { 00:11:50.613 "trtype": "TCP", 00:11:50.613 "adrfam": "IPv4", 00:11:50.613 "traddr": "10.0.0.2", 00:11:50.613 "trsvcid": "4420" 00:11:50.613 }, 00:11:50.613 "peer_address": { 00:11:50.613 "trtype": "TCP", 00:11:50.613 "adrfam": "IPv4", 00:11:50.613 "traddr": "10.0.0.1", 00:11:50.613 "trsvcid": "48252" 00:11:50.613 }, 00:11:50.613 "auth": { 00:11:50.613 "state": "completed", 00:11:50.613 "digest": "sha512", 00:11:50.613 "dhgroup": "ffdhe4096" 00:11:50.613 } 00:11:50.613 } 00:11:50.613 ]' 00:11:50.613 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.871 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.871 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.871 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:50.871 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.871 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.871 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.871 13:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.130 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:51.696 13:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:51.955 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.522 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.522 { 00:11:52.522 "cntlid": 127, 00:11:52.522 "qid": 0, 00:11:52.522 "state": "enabled", 00:11:52.522 "thread": "nvmf_tgt_poll_group_000", 00:11:52.522 "listen_address": { 00:11:52.522 "trtype": "TCP", 00:11:52.522 "adrfam": "IPv4", 00:11:52.522 "traddr": "10.0.0.2", 00:11:52.522 "trsvcid": "4420" 00:11:52.522 }, 00:11:52.522 "peer_address": { 
00:11:52.522 "trtype": "TCP", 00:11:52.522 "adrfam": "IPv4", 00:11:52.522 "traddr": "10.0.0.1", 00:11:52.522 "trsvcid": "48272" 00:11:52.522 }, 00:11:52.522 "auth": { 00:11:52.522 "state": "completed", 00:11:52.522 "digest": "sha512", 00:11:52.522 "dhgroup": "ffdhe4096" 00:11:52.522 } 00:11:52.522 } 00:11:52.522 ]' 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.522 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.780 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:52.780 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.780 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.780 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.780 13:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.038 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:53.605 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.863 13:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.429 00:11:54.429 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.429 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.429 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.687 { 00:11:54.687 "cntlid": 129, 00:11:54.687 "qid": 0, 00:11:54.687 "state": "enabled", 00:11:54.687 "thread": "nvmf_tgt_poll_group_000", 00:11:54.687 "listen_address": { 00:11:54.687 "trtype": "TCP", 00:11:54.687 "adrfam": "IPv4", 00:11:54.687 "traddr": "10.0.0.2", 00:11:54.687 "trsvcid": "4420" 00:11:54.687 }, 00:11:54.687 "peer_address": { 00:11:54.687 "trtype": "TCP", 00:11:54.687 "adrfam": "IPv4", 00:11:54.687 "traddr": "10.0.0.1", 00:11:54.687 "trsvcid": "48308" 00:11:54.687 }, 00:11:54.687 "auth": { 00:11:54.687 "state": "completed", 00:11:54.687 "digest": "sha512", 00:11:54.687 "dhgroup": "ffdhe6144" 00:11:54.687 } 00:11:54.687 } 00:11:54.687 ]' 00:11:54.687 13:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.687 13:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.945 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:55.880 13:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.880 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.447 00:11:56.447 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.447 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.447 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.705 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.705 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.705 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.705 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.705 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.705 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.705 { 00:11:56.705 "cntlid": 131, 00:11:56.705 "qid": 0, 00:11:56.705 "state": "enabled", 00:11:56.705 "thread": "nvmf_tgt_poll_group_000", 00:11:56.705 "listen_address": { 00:11:56.705 "trtype": "TCP", 00:11:56.705 "adrfam": "IPv4", 00:11:56.705 "traddr": "10.0.0.2", 00:11:56.705 "trsvcid": "4420" 00:11:56.705 }, 00:11:56.705 "peer_address": { 00:11:56.706 "trtype": "TCP", 00:11:56.706 "adrfam": "IPv4", 00:11:56.706 "traddr": "10.0.0.1", 00:11:56.706 "trsvcid": "48346" 00:11:56.706 }, 00:11:56.706 "auth": { 00:11:56.706 "state": "completed", 00:11:56.706 "digest": "sha512", 00:11:56.706 "dhgroup": "ffdhe6144" 00:11:56.706 } 00:11:56.706 } 00:11:56.706 ]' 00:11:56.706 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.706 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.706 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.706 13:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:56.706 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.706 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.706 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.706 13:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.964 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:57.907 13:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
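Each round also exercises the kernel initiator path with nvme-cli before the host entry is removed; a sketch of that leg with the DHHC secrets abbreviated (the trace shows that when no controller key is configured, as with key3, --dhchap-ctrl-secret is simply omitted):

# kernel initiator connect using the same DH-CHAP material (secret values elided here)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 \
    --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# finally drop the host entry so the next dhgroup/key pair starts from a clean subsystem
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1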
00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.907 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.478 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.478 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.478 { 00:11:58.478 "cntlid": 133, 00:11:58.478 "qid": 0, 00:11:58.478 "state": "enabled", 00:11:58.478 "thread": "nvmf_tgt_poll_group_000", 00:11:58.478 "listen_address": { 00:11:58.478 "trtype": "TCP", 00:11:58.478 "adrfam": "IPv4", 00:11:58.478 "traddr": "10.0.0.2", 00:11:58.478 "trsvcid": "4420" 00:11:58.478 }, 00:11:58.478 "peer_address": { 00:11:58.478 "trtype": "TCP", 00:11:58.478 "adrfam": "IPv4", 00:11:58.478 "traddr": "10.0.0.1", 00:11:58.478 "trsvcid": "48382" 00:11:58.478 }, 00:11:58.478 "auth": { 00:11:58.478 "state": "completed", 00:11:58.478 "digest": "sha512", 00:11:58.478 "dhgroup": "ffdhe6144" 00:11:58.478 } 00:11:58.478 } 00:11:58.478 ]' 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.739 13:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.998 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:59.566 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.824 13:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.824 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.824 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:59.824 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:00.391 00:12:00.391 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.391 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.391 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.649 { 00:12:00.649 "cntlid": 135, 00:12:00.649 "qid": 0, 00:12:00.649 "state": "enabled", 00:12:00.649 "thread": "nvmf_tgt_poll_group_000", 00:12:00.649 "listen_address": { 00:12:00.649 "trtype": "TCP", 00:12:00.649 "adrfam": "IPv4", 00:12:00.649 "traddr": "10.0.0.2", 00:12:00.649 "trsvcid": "4420" 00:12:00.649 }, 00:12:00.649 "peer_address": { 00:12:00.649 "trtype": "TCP", 00:12:00.649 "adrfam": "IPv4", 00:12:00.649 "traddr": "10.0.0.1", 00:12:00.649 "trsvcid": "37446" 00:12:00.649 }, 00:12:00.649 "auth": { 00:12:00.649 "state": "completed", 00:12:00.649 "digest": "sha512", 00:12:00.649 "dhgroup": "ffdhe6144" 00:12:00.649 } 00:12:00.649 } 00:12:00.649 ]' 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:00.649 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.908 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.908 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.908 13:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.166 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:01.732 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.989 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.990 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.990 13:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.556 00:12:02.556 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.556 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.556 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.556 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.556 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.556 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.556 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.815 { 00:12:02.815 "cntlid": 137, 00:12:02.815 "qid": 0, 00:12:02.815 "state": "enabled", 00:12:02.815 "thread": "nvmf_tgt_poll_group_000", 00:12:02.815 "listen_address": { 00:12:02.815 "trtype": "TCP", 00:12:02.815 "adrfam": "IPv4", 00:12:02.815 "traddr": "10.0.0.2", 00:12:02.815 "trsvcid": "4420" 00:12:02.815 }, 00:12:02.815 "peer_address": { 00:12:02.815 "trtype": "TCP", 00:12:02.815 "adrfam": "IPv4", 00:12:02.815 "traddr": "10.0.0.1", 00:12:02.815 "trsvcid": "37466" 00:12:02.815 }, 00:12:02.815 "auth": { 00:12:02.815 "state": "completed", 00:12:02.815 "digest": "sha512", 00:12:02.815 "dhgroup": "ffdhe8192" 00:12:02.815 } 00:12:02.815 } 00:12:02.815 ]' 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.815 13:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.074 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:12:03.642 13:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.642 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:03.642 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.642 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.642 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.642 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.642 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:03.642 13:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.900 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.901 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.901 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.901 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.901 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.848 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
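The verification that follows each attach in the trace boils down to the checks sketched below. The jq filters and expected values are copied from the trace; capturing the qpairs JSON in a shell variable and feeding it to jq via a here-string is a paraphrase of what target/auth.sh does between the rpc_cmd call and the jq filters, not the literal script text.

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # must print nvme0
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"                # expected: sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"                # expected: ffdhe8192 in this pass
    jq -r '.[0].auth.state'   <<< "$qpairs"                # expected: completed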
00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.848 { 00:12:04.848 "cntlid": 139, 00:12:04.848 "qid": 0, 00:12:04.848 "state": "enabled", 00:12:04.848 "thread": "nvmf_tgt_poll_group_000", 00:12:04.848 "listen_address": { 00:12:04.848 "trtype": "TCP", 00:12:04.848 "adrfam": "IPv4", 00:12:04.848 "traddr": "10.0.0.2", 00:12:04.848 "trsvcid": "4420" 00:12:04.848 }, 00:12:04.848 "peer_address": { 00:12:04.848 "trtype": "TCP", 00:12:04.848 "adrfam": "IPv4", 00:12:04.848 "traddr": "10.0.0.1", 00:12:04.848 "trsvcid": "37496" 00:12:04.848 }, 00:12:04.848 "auth": { 00:12:04.848 "state": "completed", 00:12:04.848 "digest": "sha512", 00:12:04.848 "dhgroup": "ffdhe8192" 00:12:04.848 } 00:12:04.848 } 00:12:04.848 ]' 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.848 13:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.848 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.848 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.146 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.146 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.146 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.146 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:01:NmRlYjlhNWIxYTMyZjBhNDUwMDBkYjI1MDk4MTlkMWXV+sth: --dhchap-ctrl-secret DHHC-1:02:YWQwMWM5ODdhYjZlZWI0NjZmYzllZTU4NzBmZmQ1ZGRiMWI5MmVjMTYyNDYzNGU5WMMtrw==: 00:12:06.081 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.081 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:06.081 13:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.081 13:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.081 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.081 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.081 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:06.081 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.340 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.908 00:12:06.908 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.908 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.908 13:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.167 { 00:12:07.167 "cntlid": 141, 00:12:07.167 "qid": 0, 00:12:07.167 "state": "enabled", 00:12:07.167 "thread": "nvmf_tgt_poll_group_000", 00:12:07.167 "listen_address": { 00:12:07.167 "trtype": "TCP", 00:12:07.167 "adrfam": "IPv4", 00:12:07.167 "traddr": "10.0.0.2", 00:12:07.167 "trsvcid": "4420" 00:12:07.167 }, 00:12:07.167 "peer_address": { 00:12:07.167 "trtype": "TCP", 00:12:07.167 "adrfam": "IPv4", 00:12:07.167 "traddr": "10.0.0.1", 00:12:07.167 "trsvcid": "37514" 00:12:07.167 }, 00:12:07.167 "auth": { 00:12:07.167 "state": "completed", 00:12:07.167 "digest": "sha512", 00:12:07.167 "dhgroup": "ffdhe8192" 00:12:07.167 } 00:12:07.167 } 00:12:07.167 ]' 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.167 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.168 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.168 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:07.168 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.168 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.168 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.168 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.426 13:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:02:MjE3MmY1ZmUzMGU5MGY2NWUxYzcxNGI5ZWNmNzA0ZmIzYzAwZjBiMzFiMzRjNmYznJnGZg==: --dhchap-ctrl-secret DHHC-1:01:YTE4YmM2YjBhZjFlY2Q5MWU2OGZmMjY0YjVkNWI1YTDZ19cj: 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.363 13:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:08.930 00:12:08.930 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.930 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.930 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.498 { 00:12:09.498 "cntlid": 
143, 00:12:09.498 "qid": 0, 00:12:09.498 "state": "enabled", 00:12:09.498 "thread": "nvmf_tgt_poll_group_000", 00:12:09.498 "listen_address": { 00:12:09.498 "trtype": "TCP", 00:12:09.498 "adrfam": "IPv4", 00:12:09.498 "traddr": "10.0.0.2", 00:12:09.498 "trsvcid": "4420" 00:12:09.498 }, 00:12:09.498 "peer_address": { 00:12:09.498 "trtype": "TCP", 00:12:09.498 "adrfam": "IPv4", 00:12:09.498 "traddr": "10.0.0.1", 00:12:09.498 "trsvcid": "37538" 00:12:09.498 }, 00:12:09.498 "auth": { 00:12:09.498 "state": "completed", 00:12:09.498 "digest": "sha512", 00:12:09.498 "dhgroup": "ffdhe8192" 00:12:09.498 } 00:12:09.498 } 00:12:09.498 ]' 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.498 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.756 13:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:12:10.321 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.321 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:10.321 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.321 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.321 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.579 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:10.579 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:10.579 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:10.579 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:10.579 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:10.579 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.837 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.838 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.838 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.838 13:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.403 00:12:11.403 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.403 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.403 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.661 { 00:12:11.661 
"cntlid": 145, 00:12:11.661 "qid": 0, 00:12:11.661 "state": "enabled", 00:12:11.661 "thread": "nvmf_tgt_poll_group_000", 00:12:11.661 "listen_address": { 00:12:11.661 "trtype": "TCP", 00:12:11.661 "adrfam": "IPv4", 00:12:11.661 "traddr": "10.0.0.2", 00:12:11.661 "trsvcid": "4420" 00:12:11.661 }, 00:12:11.661 "peer_address": { 00:12:11.661 "trtype": "TCP", 00:12:11.661 "adrfam": "IPv4", 00:12:11.661 "traddr": "10.0.0.1", 00:12:11.661 "trsvcid": "42260" 00:12:11.661 }, 00:12:11.661 "auth": { 00:12:11.661 "state": "completed", 00:12:11.661 "digest": "sha512", 00:12:11.661 "dhgroup": "ffdhe8192" 00:12:11.661 } 00:12:11.661 } 00:12:11.661 ]' 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:11.661 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.919 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.919 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.919 13:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.919 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:00:Y2JhMjZjZmY2YmVjNGY4ZDdjM2VhMjE5OWM0ZmRkMTA2YTljYmQwNDRmOTM4ZDg3nZhzjA==: --dhchap-ctrl-secret DHHC-1:03:NGNiZDg5ZTkyYjhhYWUzYzc5NTU1OGMxMGU3OTVjMTg3YzZkM2IxZjc4MDQyMTUzNWQ4YTNjOTkwY2M4YTVjNYuaayI=: 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.854 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:12.855 13:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:13.421 request: 00:12:13.421 { 00:12:13.421 "name": "nvme0", 00:12:13.421 "trtype": "tcp", 00:12:13.421 "traddr": "10.0.0.2", 00:12:13.421 "adrfam": "ipv4", 00:12:13.421 "trsvcid": "4420", 00:12:13.421 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:13.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1", 00:12:13.421 "prchk_reftag": false, 00:12:13.421 "prchk_guard": false, 00:12:13.421 "hdgst": false, 00:12:13.421 "ddgst": false, 00:12:13.421 "dhchap_key": "key2", 00:12:13.421 "method": "bdev_nvme_attach_controller", 00:12:13.421 "req_id": 1 00:12:13.421 } 00:12:13.421 Got JSON-RPC error response 00:12:13.421 response: 00:12:13.421 { 00:12:13.421 "code": -5, 00:12:13.421 "message": "Input/output error" 00:12:13.421 } 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:13.421 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:13.989 request: 00:12:13.989 { 00:12:13.989 "name": "nvme0", 00:12:13.989 "trtype": "tcp", 00:12:13.989 "traddr": "10.0.0.2", 00:12:13.989 "adrfam": "ipv4", 00:12:13.989 "trsvcid": "4420", 00:12:13.989 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:13.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1", 00:12:13.989 "prchk_reftag": false, 00:12:13.989 "prchk_guard": false, 00:12:13.989 "hdgst": false, 00:12:13.989 "ddgst": false, 00:12:13.989 "dhchap_key": "key1", 00:12:13.989 "dhchap_ctrlr_key": "ckey2", 00:12:13.989 "method": "bdev_nvme_attach_controller", 00:12:13.989 "req_id": 1 00:12:13.989 } 00:12:13.989 Got JSON-RPC error response 00:12:13.989 response: 00:12:13.989 { 00:12:13.989 "code": -5, 00:12:13.989 "message": "Input/output error" 
00:12:13.989 } 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key1 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.989 13:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.557 request: 00:12:14.557 { 00:12:14.557 "name": "nvme0", 00:12:14.557 "trtype": "tcp", 00:12:14.557 "traddr": "10.0.0.2", 00:12:14.557 "adrfam": "ipv4", 00:12:14.557 "trsvcid": "4420", 00:12:14.557 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:14.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1", 00:12:14.557 "prchk_reftag": false, 00:12:14.557 "prchk_guard": false, 00:12:14.557 "hdgst": false, 00:12:14.557 "ddgst": false, 00:12:14.557 "dhchap_key": "key1", 00:12:14.557 "dhchap_ctrlr_key": "ckey1", 00:12:14.557 "method": "bdev_nvme_attach_controller", 00:12:14.557 "req_id": 1 00:12:14.557 } 00:12:14.557 Got JSON-RPC error response 00:12:14.557 response: 00:12:14.557 { 00:12:14.557 "code": -5, 00:12:14.557 "message": "Input/output error" 00:12:14.557 } 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68384 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68384 ']' 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68384 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68384 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.557 killing process with pid 68384 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68384' 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68384 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68384 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.557 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71339 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71339 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71339 ']' 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.815 13:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71339 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71339 ']' 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
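Editor's note: the nvmfappstart step traced above amounts to launching nvmf_tgt inside the test namespace with auth-level logging and then polling for its RPC socket. A minimal standalone sketch of that launch, assuming the same repo path, namespace name and default socket path seen in this log; the simple test -S loop and the framework_start_init call are stand-ins for the suite's waitforlisten helper and are not copied from the log itself:

    #!/usr/bin/env bash
    # Start the NVMe-oF target with DH-HMAC-CHAP debug logging enabled.
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Wait until the default RPC socket exists before issuing rpc.py calls.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

    # The app was started with --wait-for-rpc, so framework initialization
    # has to be completed explicitly before subsystems can be configured.
    "$SPDK/scripts/rpc.py" framework_start_init
    echo "nvmf_tgt running with pid $nvmfpid"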
00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.748 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.006 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.006 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:16.006 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:16.006 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.006 13:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:12:16.006 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.007 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.007 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.007 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.007 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.573 00:12:16.573 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.573 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.573 13:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
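Editor's note: the sha512/ffdhe8192 pass traced above follows a fixed pattern: authorize the host NQN with a DH-HMAC-CHAP key on the target, attach from the host-side bdev_nvme layer with the same key, then verify the controller name and the negotiated auth parameters on the resulting qpair. A condensed sketch using the same rpc.py invocations, sockets, NQNs and key name that appear in this log, and assuming the keys were already registered earlier in the run, as they are here:

    SPDK=/home/vagrant/spdk_repo/spdk
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1

    # Target side: allow this host and bind DH-HMAC-CHAP key3 to it.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side (app listening on /var/tmp/host.sock): attach with the same key.
    "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3

    # Verify the attach and the negotiated digest/dhgroup/auth state.
    "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    "$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs "$SUBNQN" \
        | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'

The qpair dump a little further down is what this last check inspects: digest sha512, dhgroup ffdhe8192, state completed.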
00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.139 { 00:12:17.139 "cntlid": 1, 00:12:17.139 "qid": 0, 00:12:17.139 "state": "enabled", 00:12:17.139 "thread": "nvmf_tgt_poll_group_000", 00:12:17.139 "listen_address": { 00:12:17.139 "trtype": "TCP", 00:12:17.139 "adrfam": "IPv4", 00:12:17.139 "traddr": "10.0.0.2", 00:12:17.139 "trsvcid": "4420" 00:12:17.139 }, 00:12:17.139 "peer_address": { 00:12:17.139 "trtype": "TCP", 00:12:17.139 "adrfam": "IPv4", 00:12:17.139 "traddr": "10.0.0.1", 00:12:17.139 "trsvcid": "42342" 00:12:17.139 }, 00:12:17.139 "auth": { 00:12:17.139 "state": "completed", 00:12:17.139 "digest": "sha512", 00:12:17.139 "dhgroup": "ffdhe8192" 00:12:17.139 } 00:12:17.139 } 00:12:17.139 ]' 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.139 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.398 13:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid 0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-secret DHHC-1:03:M2RlODczNzg4ZWU0MTA1OTgyMTNjZjA1ODk2ZTg4NGUyNjg1NGM4YjQ5MmEwOTZkMGYwYTZkN2E5YmE4MDUzMOYuNsI=: 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --dhchap-key key3 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.332 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.591 request: 00:12:18.591 { 00:12:18.591 "name": "nvme0", 00:12:18.591 "trtype": "tcp", 00:12:18.591 "traddr": "10.0.0.2", 00:12:18.591 "adrfam": "ipv4", 00:12:18.591 "trsvcid": "4420", 00:12:18.591 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:18.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1", 00:12:18.591 "prchk_reftag": false, 00:12:18.591 "prchk_guard": false, 00:12:18.591 "hdgst": false, 00:12:18.591 "ddgst": false, 00:12:18.591 "dhchap_key": "key3", 00:12:18.591 "method": "bdev_nvme_attach_controller", 00:12:18.591 "req_id": 1 00:12:18.591 } 00:12:18.591 Got JSON-RPC error response 00:12:18.591 response: 00:12:18.591 { 00:12:18.591 "code": -5, 00:12:18.591 "message": "Input/output error" 00:12:18.591 } 00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
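Editor's note: the failing attach traced above is the mirror image of the successful one: the target still expects key3, but the host is restricted to a digest set with which the handshake can no longer complete, so the attach must fail with -5 (Input/output error). A sketch of that negative check with the same RPC flags seen in the log; the leading "!" plays the role of the suite's NOT() helper, and the exact reason the negotiation fails depends on how the target was configured earlier in this run:

    SPDK=/home/vagrant/spdk_repo/spdk
    HOSTRPC=("$SPDK/scripts/rpc.py" -s /var/tmp/host.sock)

    # Restrict the host to sha256 only.
    "${HOSTRPC[@]}" bdev_nvme_set_options --dhchap-digests sha256

    # The attach is now expected to be rejected, as in the log above.
    if ! "${HOSTRPC[@]}" bdev_nvme_attach_controller \
            -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 \
            -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "attach rejected as expected"
    fi

    # Restore the full digest/dhgroup set before the next test case,
    # matching the bdev_nvme_set_options call that follows in the log.
    "${HOSTRPC[@]}" bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192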
00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:18.591 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.850 13:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.108 request: 00:12:19.108 { 00:12:19.108 "name": "nvme0", 00:12:19.108 "trtype": "tcp", 00:12:19.108 "traddr": "10.0.0.2", 00:12:19.108 "adrfam": "ipv4", 00:12:19.108 "trsvcid": "4420", 00:12:19.108 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:19.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1", 00:12:19.108 "prchk_reftag": false, 00:12:19.108 "prchk_guard": false, 00:12:19.108 "hdgst": false, 00:12:19.108 "ddgst": false, 00:12:19.108 "dhchap_key": "key3", 00:12:19.108 "method": "bdev_nvme_attach_controller", 00:12:19.108 "req_id": 1 00:12:19.108 } 00:12:19.108 Got JSON-RPC error response 
00:12:19.108 response: 00:12:19.108 { 00:12:19.108 "code": -5, 00:12:19.108 "message": "Input/output error" 00:12:19.108 } 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:19.108 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:19.367 13:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:19.367 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.368 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:19.368 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:19.625 request: 00:12:19.625 { 00:12:19.625 "name": "nvme0", 00:12:19.625 "trtype": "tcp", 00:12:19.625 "traddr": "10.0.0.2", 00:12:19.625 "adrfam": "ipv4", 00:12:19.625 "trsvcid": "4420", 00:12:19.625 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:19.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1", 00:12:19.625 "prchk_reftag": false, 00:12:19.625 "prchk_guard": false, 00:12:19.625 "hdgst": false, 00:12:19.625 "ddgst": false, 00:12:19.626 "dhchap_key": "key0", 00:12:19.626 "dhchap_ctrlr_key": "key1", 00:12:19.626 "method": "bdev_nvme_attach_controller", 00:12:19.626 "req_id": 1 00:12:19.626 } 00:12:19.626 Got JSON-RPC error response 00:12:19.626 response: 00:12:19.626 { 00:12:19.626 "code": -5, 00:12:19.626 "message": "Input/output error" 00:12:19.626 } 00:12:19.626 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:19.626 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:19.626 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:19.626 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:19.626 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:19.626 13:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:20.192 00:12:20.192 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:20.192 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:20.192 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.449 13:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68416 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68416 ']' 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68416 00:12:20.449 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:20.706 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.706 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68416 00:12:20.706 killing process with pid 68416 00:12:20.706 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:20.706 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:20.706 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68416' 00:12:20.706 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68416 00:12:20.706 13:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68416 00:12:20.963 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:20.963 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.964 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:20.964 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.964 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:20.964 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.964 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:20.964 rmmod nvme_tcp 00:12:20.964 rmmod nvme_fabrics 00:12:20.964 rmmod nvme_keyring 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71339 ']' 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71339 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71339 ']' 00:12:21.221 
13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71339 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71339 00:12:21.221 killing process with pid 71339 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71339' 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71339 00:12:21.221 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71339 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.WWa /tmp/spdk.key-sha256.wQP /tmp/spdk.key-sha384.EOp /tmp/spdk.key-sha512.ZB0 /tmp/spdk.key-sha512.ToC /tmp/spdk.key-sha384.yPp /tmp/spdk.key-sha256.7DW '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:21.484 00:12:21.484 real 2m40.737s 00:12:21.484 user 6m24.482s 00:12:21.484 sys 0m25.540s 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.484 ************************************ 00:12:21.484 END TEST nvmf_auth_target 00:12:21.484 ************************************ 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.484 ************************************ 00:12:21.484 START TEST nvmf_bdevio_no_huge 00:12:21.484 ************************************ 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:21.484 * Looking for test storage... 00:12:21.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.484 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:21.485 Cannot find device "nvmf_tgt_br" 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:21.485 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:21.742 Cannot find device "nvmf_tgt_br2" 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:21.742 Cannot find device "nvmf_tgt_br" 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:21.742 Cannot find device "nvmf_tgt_br2" 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:21.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:21.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:21.742 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:22.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:22.000 00:12:22.000 --- 10.0.0.2 ping statistics --- 00:12:22.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.000 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:22.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:22.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:12:22.000 00:12:22.000 --- 10.0.0.3 ping statistics --- 00:12:22.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.000 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:22.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:22.000 00:12:22.000 --- 10.0.0.1 ping statistics --- 00:12:22.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.000 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71653 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71653 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71653 ']' 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:22.000 13:08:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:22.000 [2024-07-25 13:08:14.040992] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:22.000 [2024-07-25 13:08:14.041099] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:22.000 [2024-07-25 13:08:14.182160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.258 [2024-07-25 13:08:14.338295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.258 [2024-07-25 13:08:14.338357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.258 [2024-07-25 13:08:14.338384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.258 [2024-07-25 13:08:14.338394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.258 [2024-07-25 13:08:14.338404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.258 [2024-07-25 13:08:14.338710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:22.258 [2024-07-25 13:08:14.339150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:22.258 [2024-07-25 13:08:14.339500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:22.258 [2024-07-25 13:08:14.339505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.258 [2024-07-25 13:08:14.345905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:23.192 [2024-07-25 13:08:15.057312] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:23.192 Malloc0 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.192 13:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:23.192 [2024-07-25 13:08:15.097632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:23.192 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:23.192 { 00:12:23.192 "params": { 00:12:23.192 "name": "Nvme$subsystem", 00:12:23.192 "trtype": "$TEST_TRANSPORT", 00:12:23.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.192 "adrfam": "ipv4", 00:12:23.192 "trsvcid": "$NVMF_PORT", 00:12:23.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.192 "hdgst": ${hdgst:-false}, 00:12:23.192 "ddgst": ${ddgst:-false} 00:12:23.192 }, 00:12:23.192 "method": "bdev_nvme_attach_controller" 00:12:23.192 } 00:12:23.192 EOF 00:12:23.192 )") 00:12:23.193 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:23.193 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
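Editor's note: before bdevio runs, the target side above is assembled from five RPCs: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace and a listener. A compact sketch of that setup with the same parameters, followed by the bdevio invocation; the config file path is a hypothetical stand-in for what gen_nvmf_target_json pipes in through /dev/fd/62, and the surrounding "subsystems"/"bdev" wrapper is the standard SPDK app-config layout assumed here rather than copied from the log:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Target side: transport, backing bdev, subsystem, namespace, listener.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator-side config for bdevio: one NVMe bdev attached over TCP,
    # with the same params gen_nvmf_target_json prints just below.
    cat > /tmp/bdevio_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Run bdevio without hugepages, capped at 1024 MiB, as in this test.
    "$SPDK/test/bdev/bdevio/bdevio" --json /tmp/bdevio_nvmf.json --no-huge -s 1024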
00:12:23.193 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:23.193 13:08:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:23.193 "params": { 00:12:23.193 "name": "Nvme1", 00:12:23.193 "trtype": "tcp", 00:12:23.193 "traddr": "10.0.0.2", 00:12:23.193 "adrfam": "ipv4", 00:12:23.193 "trsvcid": "4420", 00:12:23.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.193 "hdgst": false, 00:12:23.193 "ddgst": false 00:12:23.193 }, 00:12:23.193 "method": "bdev_nvme_attach_controller" 00:12:23.193 }' 00:12:23.193 [2024-07-25 13:08:15.152536] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:23.193 [2024-07-25 13:08:15.152650] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71689 ] 00:12:23.193 [2024-07-25 13:08:15.315275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.450 [2024-07-25 13:08:15.437590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.450 [2024-07-25 13:08:15.437726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.450 [2024-07-25 13:08:15.437729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.450 [2024-07-25 13:08:15.450501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:23.450 I/O targets: 00:12:23.450 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:23.450 00:12:23.450 00:12:23.450 CUnit - A unit testing framework for C - Version 2.1-3 00:12:23.450 http://cunit.sourceforge.net/ 00:12:23.450 00:12:23.450 00:12:23.450 Suite: bdevio tests on: Nvme1n1 00:12:23.450 Test: blockdev write read block ...passed 00:12:23.450 Test: blockdev write zeroes read block ...passed 00:12:23.450 Test: blockdev write zeroes read no split ...passed 00:12:23.450 Test: blockdev write zeroes read split ...passed 00:12:23.708 Test: blockdev write zeroes read split partial ...passed 00:12:23.708 Test: blockdev reset ...[2024-07-25 13:08:15.643080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:23.708 [2024-07-25 13:08:15.643177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ba870 (9): Bad file descriptor 00:12:23.708 [2024-07-25 13:08:15.660089] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:23.708 passed 00:12:23.708 Test: blockdev write read 8 blocks ...passed 00:12:23.708 Test: blockdev write read size > 128k ...passed 00:12:23.708 Test: blockdev write read invalid size ...passed 00:12:23.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:23.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:23.708 Test: blockdev write read max offset ...passed 00:12:23.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:23.708 Test: blockdev writev readv 8 blocks ...passed 00:12:23.708 Test: blockdev writev readv 30 x 1block ...passed 00:12:23.708 Test: blockdev writev readv block ...passed 00:12:23.708 Test: blockdev writev readv size > 128k ...passed 00:12:23.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:23.708 Test: blockdev comparev and writev ...[2024-07-25 13:08:15.668555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.708 [2024-07-25 13:08:15.668602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:23.708 [2024-07-25 13:08:15.668629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.708 [2024-07-25 13:08:15.668644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:23.708 [2024-07-25 13:08:15.669055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.708 [2024-07-25 13:08:15.669090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:23.708 [2024-07-25 13:08:15.669114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.708 [2024-07-25 13:08:15.669128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:23.708 [2024-07-25 13:08:15.669535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.709 [2024-07-25 13:08:15.669570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:23.709 [2024-07-25 13:08:15.669593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.709 [2024-07-25 13:08:15.669606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:23.709 [2024-07-25 13:08:15.670110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.709 [2024-07-25 13:08:15.670144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:23.709 [2024-07-25 13:08:15.670167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.709 [2024-07-25 13:08:15.670181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:23.709 passed 00:12:23.709 Test: blockdev nvme passthru rw ...passed 00:12:23.709 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:08:15.671146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.709 [2024-07-25 13:08:15.671175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:23.709 [2024-07-25 13:08:15.671295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.709 [2024-07-25 13:08:15.671327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:23.709 [2024-07-25 13:08:15.671435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.709 [2024-07-25 13:08:15.671464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:23.709 [2024-07-25 13:08:15.671579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.709 [2024-07-25 13:08:15.671607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:23.709 passed 00:12:23.709 Test: blockdev nvme admin passthru ...passed 00:12:23.709 Test: blockdev copy ...passed 00:12:23.709 00:12:23.709 Run Summary: Type Total Ran Passed Failed Inactive 00:12:23.709 suites 1 1 n/a 0 0 00:12:23.709 tests 23 23 23 0 0 00:12:23.709 asserts 152 152 152 0 n/a 00:12:23.709 00:12:23.709 Elapsed time = 0.164 seconds 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.968 rmmod nvme_tcp 00:12:23.968 rmmod nvme_fabrics 00:12:23.968 rmmod nvme_keyring 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71653 ']' 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71653 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71653 ']' 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71653 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.968 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71653 00:12:24.226 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:24.226 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:24.226 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71653' 00:12:24.226 killing process with pid 71653 00:12:24.226 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71653 00:12:24.226 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71653 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.483 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:24.483 00:12:24.483 real 0m3.094s 00:12:24.483 user 0m10.000s 00:12:24.483 sys 0m1.308s 00:12:24.484 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.484 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:24.484 ************************************ 00:12:24.484 END TEST nvmf_bdevio_no_huge 00:12:24.484 ************************************ 00:12:24.484 13:08:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:24.484 13:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
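The kill-and-reap sequence traced just above (killprocess 71653) follows a small, reusable pattern; a minimal sketch, with the sudo branch of the real helper in autotest_common.sh left out:
  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" || return 0                   # nothing to do if it is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_3 in the trace above
      # the real helper branches when the name is "sudo"; skipped in this sketch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                          # reap the child so its exit status is collected
  }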
00:12:24.484 13:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.484 13:08:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.484 ************************************ 00:12:24.484 START TEST nvmf_tls 00:12:24.484 ************************************ 00:12:24.484 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:24.741 * Looking for test storage... 00:12:24.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:24.741 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:24.741 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
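The NVME_HOSTNQN/NVME_HOSTID pair exported a few lines earlier comes from nvme-cli; a small sketch of that setup and of how a later nvme connect call would consume it. The parameter expansion used to split out the uuid is my shorthand, not necessarily how common.sh derives it, and the connect line is a hypothetical use against the target set up later in this test.
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:0dc0e430-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the uuid after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1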
00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:24.742 Cannot find device 
"nvmf_tgt_br" 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.742 Cannot find device "nvmf_tgt_br2" 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:24.742 Cannot find device "nvmf_tgt_br" 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:24.742 Cannot find device "nvmf_tgt_br2" 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.742 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.743 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.743 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:25.000 13:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:25.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:25.000 00:12:25.000 --- 10.0.0.2 ping statistics --- 00:12:25.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.000 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:25.000 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:25.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:25.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:25.000 00:12:25.000 --- 10.0.0.3 ping statistics --- 00:12:25.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.000 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:25.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:25.001 00:12:25.001 --- 10.0.0.1 ping statistics --- 00:12:25.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.001 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=71869 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 71869 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71869 ']' 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.001 13:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:25.001 [2024-07-25 13:08:17.142613] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
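The second half of the topology setup, condensed the same way: bring the namespace-side links up, bridge the three *_br peers, open TCP/4420 toward the initiator interface, and verify reachability in both directions (the pings whose output appears above):
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target
  ping -c 1 10.0.0.3                                  # root namespace -> second target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator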
00:12:25.001 [2024-07-25 13:08:17.142754] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.258 [2024-07-25 13:08:17.286742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.258 [2024-07-25 13:08:17.393656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.258 [2024-07-25 13:08:17.393738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.258 [2024-07-25 13:08:17.393765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.258 [2024-07-25 13:08:17.393777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.258 [2024-07-25 13:08:17.393786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.258 [2024-07-25 13:08:17.393827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:26.190 true 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:26.190 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:26.447 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:26.447 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:26.447 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:26.704 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:26.704 13:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:26.962 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:26.962 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:26.962 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:27.220 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:27.220 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:27.478 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:27.478 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:27.478 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:27.478 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:27.478 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:27.478 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:27.478 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:27.737 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:27.737 13:08:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:12:27.995 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:27.995 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:27.995 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:28.253 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:28.253 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.6Z2diUBkvM 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6NCvKllKbh 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.6Z2diUBkvM 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6NCvKllKbh 00:12:28.512 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:28.771 13:08:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:29.029 [2024-07-25 13:08:21.126470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:29.029 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.6Z2diUBkvM 00:12:29.029 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6Z2diUBkvM 00:12:29.029 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:29.288 [2024-07-25 13:08:21.430918] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.288 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:29.547 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:29.806 [2024-07-25 13:08:21.874994] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:29.806 [2024-07-25 13:08:21.875216] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.806 13:08:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:30.065 malloc0 00:12:30.065 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:30.324 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Z2diUBkvM 00:12:30.583 [2024-07-25 13:08:22.558551] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:30.583 13:08:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.6Z2diUBkvM 00:12:42.788 Initializing NVMe Controllers 00:12:42.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:42.788 Initialization complete. Launching workers. 00:12:42.788 ======================================================== 00:12:42.788 Latency(us) 00:12:42.788 Device Information : IOPS MiB/s Average min max 00:12:42.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10152.05 39.66 6305.84 1487.43 13228.98 00:12:42.788 ======================================================== 00:12:42.788 Total : 10152.05 39.66 6305.84 1487.43 13228.98 00:12:42.788 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Z2diUBkvM 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6Z2diUBkvM' 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72102 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:42.788 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:42.789 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72102 /var/tmp/bdevperf.sock 00:12:42.789 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72102 ']' 00:12:42.789 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.789 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:42.789 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
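The two /tmp/tmp.* files handed to --psk and --psk-path in this stretch hold keys in the NVMe TLS PSK interchange format, NVMeTLSkey-1:<id>:<base64 payload>:. A small inspection sketch follows; the four bytes after the configured key in the decoded payload are, I assume, a checksum (CRC32) appended by format_interchange_psk in test/nvmf/common.sh, which is the authoritative reference here.
  key=$(cat /tmp/tmp.6Z2diUBkvM)              # NVMeTLSkey-1:01:MDAxMTIy...JEiQ:
  payload=$(cut -d: -f3 <<< "$key")           # the base64 field between the two inner ':'
  base64 -d <<< "$payload" | xxd              # configured key bytes plus 4 trailing bytes (assumed CRC32)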
00:12:42.789 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.789 13:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.789 [2024-07-25 13:08:32.841593] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:42.789 [2024-07-25 13:08:32.841678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72102 ] 00:12:42.789 [2024-07-25 13:08:32.980500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.789 [2024-07-25 13:08:33.101265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.789 [2024-07-25 13:08:33.164940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:42.789 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:42.789 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:42.789 13:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Z2diUBkvM 00:12:42.789 [2024-07-25 13:08:34.061731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:42.789 [2024-07-25 13:08:34.061908] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:42.789 TLSTESTn1 00:12:42.789 13:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:42.789 Running I/O for 10 seconds... 
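The TLS data-path check driven here reduces to three steps, condensed below with the arguments exactly as they appear in the trace (paths relative to the SPDK repo root): start bdevperf in wait mode, attach a PSK-protected controller over its private RPC socket, then trigger the queued workload.
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # once the RPC socket is up, attach the TLS-protected controller as bdev "TLSTEST"
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Z2diUBkvM
  # -z above makes bdevperf wait for this trigger before running the verify workload
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests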
00:12:52.778 00:12:52.778 Latency(us) 00:12:52.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.778 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:52.778 Verification LBA range: start 0x0 length 0x2000 00:12:52.778 TLSTESTn1 : 10.02 3815.65 14.90 0.00 0.00 33484.69 6642.97 26571.87 00:12:52.778 =================================================================================================================== 00:12:52.778 Total : 3815.65 14.90 0.00 0.00 33484.69 6642.97 26571.87 00:12:52.778 0 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72102 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72102 ']' 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72102 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72102 00:12:52.778 killing process with pid 72102 00:12:52.778 Received shutdown signal, test time was about 10.000000 seconds 00:12:52.778 00:12:52.778 Latency(us) 00:12:52.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.778 =================================================================================================================== 00:12:52.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72102' 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72102 00:12:52.778 [2024-07-25 13:08:44.346722] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72102 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6NCvKllKbh 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6NCvKllKbh 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.778 13:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6NCvKllKbh 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6NCvKllKbh' 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72230 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72230 /var/tmp/bdevperf.sock 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72230 ']' 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:52.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.778 13:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:52.778 [2024-07-25 13:08:44.612283] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
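The NOT wrapper whose xtrace fragments (local es=0, valid_exec_arg, (( es > 128 )), (( !es == 0 ))) bracket these runs simply inverts the wrapped command's exit status; a rough sketch under that assumption, not the exact autotest_common.sh implementation:
  NOT() {
      local es=0
      "$@" || es=$?
      # statuses above 128 mean the command died on a signal; the real helper appears
      # to treat those separately rather than counting them as an expected failure
      if (( es > 128 )); then
          return "$es"
      fi
      (( es != 0 ))      # succeed only when the wrapped command failed
  }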
00:12:52.778 [2024-07-25 13:08:44.612375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72230 ] 00:12:52.778 [2024-07-25 13:08:44.749179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.778 [2024-07-25 13:08:44.841908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.778 [2024-07-25 13:08:44.896398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:53.346 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.346 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:53.346 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6NCvKllKbh 00:12:53.604 [2024-07-25 13:08:45.754042] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:53.604 [2024-07-25 13:08:45.754214] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:53.604 [2024-07-25 13:08:45.759066] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:53.604 [2024-07-25 13:08:45.759616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf31f0 (107): Transport endpoint is not connected 00:12:53.604 [2024-07-25 13:08:45.760600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf31f0 (9): Bad file descriptor 00:12:53.604 [2024-07-25 13:08:45.761597] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:53.604 [2024-07-25 13:08:45.761635] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:53.604 [2024-07-25 13:08:45.761665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
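With the second, non-matching key the same attach is expected to fail, which is exactly the JSON-RPC error dumped next. A minimal hedged reproduction, assuming the target provisioned earlier is still listening with the first key loaded:
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.6NCvKllKbh \
      || echo "attach failed as expected: the PSKs do not match, so the TLS connection never comes up"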
00:12:53.604 request: 00:12:53.604 { 00:12:53.604 "name": "TLSTEST", 00:12:53.604 "trtype": "tcp", 00:12:53.604 "traddr": "10.0.0.2", 00:12:53.604 "adrfam": "ipv4", 00:12:53.604 "trsvcid": "4420", 00:12:53.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.604 "prchk_reftag": false, 00:12:53.604 "prchk_guard": false, 00:12:53.604 "hdgst": false, 00:12:53.604 "ddgst": false, 00:12:53.604 "psk": "/tmp/tmp.6NCvKllKbh", 00:12:53.604 "method": "bdev_nvme_attach_controller", 00:12:53.604 "req_id": 1 00:12:53.604 } 00:12:53.604 Got JSON-RPC error response 00:12:53.604 response: 00:12:53.604 { 00:12:53.604 "code": -5, 00:12:53.604 "message": "Input/output error" 00:12:53.604 } 00:12:53.604 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72230 00:12:53.604 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72230 ']' 00:12:53.604 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72230 00:12:53.604 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:53.604 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.604 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72230 00:12:53.864 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:53.864 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:53.864 killing process with pid 72230 00:12:53.864 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72230' 00:12:53.864 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72230 00:12:53.864 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.864 00:12:53.864 Latency(us) 00:12:53.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.864 =================================================================================================================== 00:12:53.864 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:53.864 [2024-07-25 13:08:45.797749] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:53.864 13:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72230 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6Z2diUBkvM 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6Z2diUBkvM 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.6Z2diUBkvM 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6Z2diUBkvM' 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72262 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72262 /var/tmp/bdevperf.sock 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72262 ']' 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:53.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.864 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.123 [2024-07-25 13:08:46.064463] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:54.123 [2024-07-25 13:08:46.064549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72262 ] 00:12:54.123 [2024-07-25 13:08:46.201265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.123 [2024-07-25 13:08:46.282970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.381 [2024-07-25 13:08:46.336203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:54.949 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.949 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:54.949 13:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.6Z2diUBkvM 00:12:54.949 [2024-07-25 13:08:47.134456] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:54.949 [2024-07-25 13:08:47.134601] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:55.208 [2024-07-25 13:08:47.146494] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:55.208 [2024-07-25 13:08:47.146556] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:55.208 [2024-07-25 13:08:47.146669] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:55.208 [2024-07-25 13:08:47.147083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a1f0 (107): Transport endpoint is not connected 00:12:55.208 [2024-07-25 13:08:47.148073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80a1f0 (9): Bad file descriptor 00:12:55.208 [2024-07-25 13:08:47.149069] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:55.208 [2024-07-25 13:08:47.149110] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:55.208 [2024-07-25 13:08:47.149123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:55.208 request: 00:12:55.208 { 00:12:55.208 "name": "TLSTEST", 00:12:55.208 "trtype": "tcp", 00:12:55.208 "traddr": "10.0.0.2", 00:12:55.208 "adrfam": "ipv4", 00:12:55.208 "trsvcid": "4420", 00:12:55.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.208 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:55.208 "prchk_reftag": false, 00:12:55.208 "prchk_guard": false, 00:12:55.208 "hdgst": false, 00:12:55.208 "ddgst": false, 00:12:55.208 "psk": "/tmp/tmp.6Z2diUBkvM", 00:12:55.208 "method": "bdev_nvme_attach_controller", 00:12:55.208 "req_id": 1 00:12:55.208 } 00:12:55.208 Got JSON-RPC error response 00:12:55.208 response: 00:12:55.208 { 00:12:55.208 "code": -5, 00:12:55.208 "message": "Input/output error" 00:12:55.208 } 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72262 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72262 ']' 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72262 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72262 00:12:55.208 killing process with pid 72262 00:12:55.208 Received shutdown signal, test time was about 10.000000 seconds 00:12:55.208 00:12:55.208 Latency(us) 00:12:55.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.208 =================================================================================================================== 00:12:55.208 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72262' 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72262 00:12:55.208 [2024-07-25 13:08:47.198648] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:55.208 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72262 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Z2diUBkvM 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Z2diUBkvM 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:55.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.6Z2diUBkvM 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6Z2diUBkvM' 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72285 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72285 /var/tmp/bdevperf.sock 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72285 ']' 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.480 13:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:55.480 [2024-07-25 13:08:47.456464] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:55.480 [2024-07-25 13:08:47.456574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72285 ] 00:12:55.480 [2024-07-25 13:08:47.589341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.779 [2024-07-25 13:08:47.685479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.779 [2024-07-25 13:08:47.736387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:56.345 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.345 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:56.345 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6Z2diUBkvM 00:12:56.603 [2024-07-25 13:08:48.641857] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:56.603 [2024-07-25 13:08:48.642010] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:56.603 [2024-07-25 13:08:48.646808] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:56.603 [2024-07-25 13:08:48.646889] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:56.603 [2024-07-25 13:08:48.646940] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:56.603 [2024-07-25 13:08:48.647562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb61f0 (107): Transport endpoint is not connected 00:12:56.603 [2024-07-25 13:08:48.648550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb61f0 (9): Bad file descriptor 00:12:56.603 [2024-07-25 13:08:48.649546] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:56.603 [2024-07-25 13:08:48.649584] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:56.603 [2024-07-25 13:08:48.649614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:12:56.603 request: 00:12:56.603 { 00:12:56.603 "name": "TLSTEST", 00:12:56.604 "trtype": "tcp", 00:12:56.604 "traddr": "10.0.0.2", 00:12:56.604 "adrfam": "ipv4", 00:12:56.604 "trsvcid": "4420", 00:12:56.604 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:56.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:56.604 "prchk_reftag": false, 00:12:56.604 "prchk_guard": false, 00:12:56.604 "hdgst": false, 00:12:56.604 "ddgst": false, 00:12:56.604 "psk": "/tmp/tmp.6Z2diUBkvM", 00:12:56.604 "method": "bdev_nvme_attach_controller", 00:12:56.604 "req_id": 1 00:12:56.604 } 00:12:56.604 Got JSON-RPC error response 00:12:56.604 response: 00:12:56.604 { 00:12:56.604 "code": -5, 00:12:56.604 "message": "Input/output error" 00:12:56.604 } 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72285 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72285 ']' 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72285 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72285 00:12:56.604 killing process with pid 72285 00:12:56.604 Received shutdown signal, test time was about 10.000000 seconds 00:12:56.604 00:12:56.604 Latency(us) 00:12:56.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.604 =================================================================================================================== 00:12:56.604 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72285' 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72285 00:12:56.604 [2024-07-25 13:08:48.691197] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:56.604 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72285 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72307 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72307 /var/tmp/bdevperf.sock 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72307 ']' 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.862 13:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:56.862 [2024-07-25 13:08:48.943247] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:56.862 [2024-07-25 13:08:48.943352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72307 ] 00:12:57.120 [2024-07-25 13:08:49.073180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.120 [2024-07-25 13:08:49.155402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.120 [2024-07-25 13:08:49.209220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:57.686 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.686 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:57.686 13:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:57.945 [2024-07-25 13:08:50.066649] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:57.945 [2024-07-25 13:08:50.068532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1610c00 (9): Bad file descriptor 00:12:57.945 [2024-07-25 13:08:50.069528] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:57.945 [2024-07-25 13:08:50.069568] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:57.945 [2024-07-25 13:08:50.069582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:57.945 request: 00:12:57.945 { 00:12:57.945 "name": "TLSTEST", 00:12:57.945 "trtype": "tcp", 00:12:57.945 "traddr": "10.0.0.2", 00:12:57.945 "adrfam": "ipv4", 00:12:57.945 "trsvcid": "4420", 00:12:57.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:57.945 "prchk_reftag": false, 00:12:57.945 "prchk_guard": false, 00:12:57.945 "hdgst": false, 00:12:57.945 "ddgst": false, 00:12:57.945 "method": "bdev_nvme_attach_controller", 00:12:57.945 "req_id": 1 00:12:57.945 } 00:12:57.945 Got JSON-RPC error response 00:12:57.945 response: 00:12:57.945 { 00:12:57.945 "code": -5, 00:12:57.945 "message": "Input/output error" 00:12:57.945 } 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72307 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72307 ']' 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72307 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72307 00:12:57.945 killing process with pid 72307 00:12:57.945 Received shutdown signal, test time was about 10.000000 seconds 00:12:57.945 00:12:57.945 Latency(us) 00:12:57.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.945 =================================================================================================================== 00:12:57.945 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72307' 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72307 00:12:57.945 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72307 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 71869 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71869 ']' 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71869 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 71869 00:12:58.204 killing process with pid 71869 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71869' 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71869 00:12:58.204 [2024-07-25 13:08:50.359017] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:58.204 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71869 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:58.463 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.jq1FUXD1rm 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.jq1FUXD1rm 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72351 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72351 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72351 ']' 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:58.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
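The key_long produced a few lines above by format_interchange_psk wraps the configured key in the NVMe TLS PSK interchange format. A rough sketch of that formatting (assuming the base64 payload is the key bytes followed by their CRC-32; the CRC byte order is an assumption worth checking against format_interchange_psk in nvmf/common.sh):

```python
import base64
import zlib

def format_interchange_psk(configured_key: bytes, hash_id: int) -> str:
    # Interchange layout: "NVMeTLSkey-1:0<hash>:<base64(key || CRC-32)>:".
    # Assumption: the CRC-32 of the key is appended little-endian before encoding.
    crc = zlib.crc32(configured_key).to_bytes(4, "little")
    payload = base64.b64encode(configured_key + crc).decode()
    return f"NVMeTLSkey-1:0{hash_id}:{payload}:"

# The test passes the hex string itself as key material with digest 2, which
# should reproduce the NVMeTLSkey-1:02:MDAx...: value captured above.
print(format_interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))
```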
00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.722 13:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.722 [2024-07-25 13:08:50.723982] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:58.722 [2024-07-25 13:08:50.724079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.722 [2024-07-25 13:08:50.862597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.981 [2024-07-25 13:08:50.969823] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.981 [2024-07-25 13:08:50.969906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.981 [2024-07-25 13:08:50.969918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.981 [2024-07-25 13:08:50.969927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.981 [2024-07-25 13:08:50.969934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.981 [2024-07-25 13:08:50.969966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.981 [2024-07-25 13:08:51.031623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:59.548 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.548 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:59.548 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:59.548 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:59.548 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.548 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.549 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.jq1FUXD1rm 00:12:59.549 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jq1FUXD1rm 00:12:59.549 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:59.807 [2024-07-25 13:08:51.967941] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.807 13:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:00.065 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 -k 00:13:00.324 [2024-07-25 13:08:52.432132] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:00.324 [2024-07-25 13:08:52.432415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.324 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:00.583 malloc0 00:13:00.583 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:00.842 13:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm 00:13:01.101 [2024-07-25 13:08:53.157172] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jq1FUXD1rm 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jq1FUXD1rm' 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72406 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72406 /var/tmp/bdevperf.sock 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72406 ']' 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.101 13:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:01.101 [2024-07-25 13:08:53.221321] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
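Condensed, the setup_nvmf_tgt step traced above amounts to six rpc.py calls: create the TCP transport, create the subsystem, add a TLS-enabled listener, back it with a malloc bdev, attach the namespace, and register the host with its PSK. A standalone recap (hypothetical wrapper script; every command, NQN, and path is copied from the trace):

```python
import subprocess

# Paths, NQNs, and the PSK file are taken verbatim from the trace above.
RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SUBNQN = "nqn.2016-06.io.spdk:cnode1"
HOSTNQN = "nqn.2016-06.io.spdk:host1"
PSK = "/tmp/tmp.jq1FUXD1rm"

def rpc(*args: str) -> None:
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", SUBNQN, "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1")
rpc("nvmf_subsystem_add_host", SUBNQN, HOSTNQN, "--psk", PSK)
```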
00:13:01.101 [2024-07-25 13:08:53.221425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72406 ] 00:13:01.359 [2024-07-25 13:08:53.356433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.359 [2024-07-25 13:08:53.472309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.359 [2024-07-25 13:08:53.535166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:02.295 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.295 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:02.295 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm 00:13:02.295 [2024-07-25 13:08:54.370407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:02.295 [2024-07-25 13:08:54.370537] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:02.295 TLSTESTn1 00:13:02.295 13:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:02.554 Running I/O for 10 seconds... 00:13:12.528 00:13:12.529 Latency(us) 00:13:12.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.529 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:12.529 Verification LBA range: start 0x0 length 0x2000 00:13:12.529 TLSTESTn1 : 10.02 4020.53 15.71 0.00 0.00 31768.40 7506.85 37415.10 00:13:12.529 =================================================================================================================== 00:13:12.529 Total : 4020.53 15.71 0.00 0.00 31768.40 7506.85 37415.10 00:13:12.529 0 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72406 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72406 ']' 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72406 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72406 00:13:12.529 killing process with pid 72406 00:13:12.529 Received shutdown signal, test time was about 10.000000 seconds 00:13:12.529 00:13:12.529 Latency(us) 00:13:12.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.529 =================================================================================================================== 00:13:12.529 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72406' 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72406 00:13:12.529 [2024-07-25 13:09:04.652645] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:12.529 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72406 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.jq1FUXD1rm 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jq1FUXD1rm 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jq1FUXD1rm 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jq1FUXD1rm 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jq1FUXD1rm' 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72535 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72535 /var/tmp/bdevperf.sock 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72535 ']' 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 
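The TLSTESTn1 verification results above (4020.53 IOPS at 15.71 MiB/s) follow directly from the 4096-byte I/O size that bdevperf was started with (-o 4096):

```python
iops = 4020.53          # from the TLSTESTn1 row above
io_size = 4096          # bytes, from the bdevperf "-o 4096" option
mib_per_s = iops * io_size / 2**20
print(f"{mib_per_s:.2f} MiB/s")   # ~15.71, matching the table
```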
00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:12.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:12.788 13:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:13.047 [2024-07-25 13:09:04.992611] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:13.047 [2024-07-25 13:09:04.993384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72535 ] 00:13:13.047 [2024-07-25 13:09:05.128313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.047 [2024-07-25 13:09:05.212509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.306 [2024-07-25 13:09:05.265221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:13.874 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:13.874 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:13.874 13:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm 00:13:14.133 [2024-07-25 13:09:06.132217] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:14.133 [2024-07-25 13:09:06.132312] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:14.133 [2024-07-25 13:09:06.132324] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.jq1FUXD1rm 00:13:14.133 request: 00:13:14.133 { 00:13:14.133 "name": "TLSTEST", 00:13:14.133 "trtype": "tcp", 00:13:14.133 "traddr": "10.0.0.2", 00:13:14.133 "adrfam": "ipv4", 00:13:14.133 "trsvcid": "4420", 00:13:14.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:14.133 "prchk_reftag": false, 00:13:14.133 "prchk_guard": false, 00:13:14.133 "hdgst": false, 00:13:14.133 "ddgst": false, 00:13:14.133 "psk": "/tmp/tmp.jq1FUXD1rm", 00:13:14.133 "method": "bdev_nvme_attach_controller", 00:13:14.133 "req_id": 1 00:13:14.133 } 00:13:14.133 Got JSON-RPC error response 00:13:14.133 response: 00:13:14.133 { 00:13:14.133 "code": -1, 00:13:14.133 "message": "Operation not permitted" 00:13:14.133 } 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72535 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72535 ']' 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72535 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72535 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72535' 00:13:14.133 killing process with pid 72535 00:13:14.133 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72535 00:13:14.134 Received shutdown signal, test time was about 10.000000 seconds 00:13:14.134 00:13:14.134 Latency(us) 00:13:14.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.134 =================================================================================================================== 00:13:14.134 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:14.134 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72535 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 72351 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72351 ']' 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72351 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72351 00:13:14.393 killing process with pid 72351 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72351' 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72351 00:13:14.393 [2024-07-25 13:09:06.415652] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:14.393 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72351 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.652 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72573 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72573 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72573 ']' 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.652 13:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.652 [2024-07-25 13:09:06.714343] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:14.652 [2024-07-25 13:09:06.714664] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.911 [2024-07-25 13:09:06.848920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.911 [2024-07-25 13:09:06.949230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.911 [2024-07-25 13:09:06.949587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.911 [2024-07-25 13:09:06.949773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.911 [2024-07-25 13:09:06.950036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.911 [2024-07-25 13:09:06.950080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
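Both ends insist on owner-only permissions for the PSK file: with the key chmod'ed to 0666, the initiator attach above failed with 'Operation not permitted', and the target's nvmf_subsystem_add_host below fails with 'Incorrect permissions for PSK file' until the key is restored to 0600. A rough sketch of that kind of mode check (hypothetical helper, not SPDK's implementation; the exact mode bits rejected are an assumption):

```python
import os
import stat

def assert_psk_file_private(path: str) -> None:
    # Reject key files that group/other can access, mirroring the
    # "Incorrect permissions for PSK file" errors in the trace.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path}: mode {oct(mode)} is too permissive; use 0600")

# After chmod 0600 (as target/tls.sh does further down), the check passes:
# os.chmod("/tmp/tmp.jq1FUXD1rm", 0o600); assert_psk_file_private("/tmp/tmp.jq1FUXD1rm")
```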
00:13:14.911 [2024-07-25 13:09:06.950142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.911 [2024-07-25 13:09:07.007956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.jq1FUXD1rm 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jq1FUXD1rm 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.jq1FUXD1rm 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jq1FUXD1rm 00:13:15.847 13:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:15.847 [2024-07-25 13:09:08.001845] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.847 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:16.106 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:16.365 [2024-07-25 13:09:08.514005] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:16.365 [2024-07-25 13:09:08.514282] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.365 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:16.624 malloc0 00:13:16.624 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:16.882 13:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm 00:13:17.141 [2024-07-25 13:09:09.204788] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:17.141 [2024-07-25 13:09:09.204849] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:17.141 [2024-07-25 13:09:09.204904] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:17.141 request: 00:13:17.141 { 00:13:17.141 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.141 "host": "nqn.2016-06.io.spdk:host1", 00:13:17.141 "psk": "/tmp/tmp.jq1FUXD1rm", 00:13:17.141 "method": "nvmf_subsystem_add_host", 00:13:17.141 "req_id": 1 00:13:17.141 } 00:13:17.141 Got JSON-RPC error response 00:13:17.141 response: 00:13:17.141 { 00:13:17.141 "code": -32603, 00:13:17.141 "message": "Internal error" 00:13:17.141 } 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 72573 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72573 ']' 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72573 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72573 00:13:17.141 killing process with pid 72573 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72573' 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72573 00:13:17.141 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72573 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.jq1FUXD1rm 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72636 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72636 00:13:17.399 13:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72636 ']' 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.399 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.400 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.400 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.400 13:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.400 [2024-07-25 13:09:09.515520] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:17.400 [2024-07-25 13:09:09.515598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.658 [2024-07-25 13:09:09.649977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.658 [2024-07-25 13:09:09.742998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.658 [2024-07-25 13:09:09.743051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.658 [2024-07-25 13:09:09.743080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.658 [2024-07-25 13:09:09.743089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.658 [2024-07-25 13:09:09.743096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
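The failed nvmf_subsystem_add_host earlier in this trace ("Incorrect permissions for PSK file", JSON-RPC error -32603) and the chmod 0600 that follows it show the precondition the target places on the PSK file before it will load the key. A minimal reproduction of that fix is sketched below; the key contents are a placeholder and are not taken from this run.

    key_path=$(mktemp)                      # e.g. /tmp/tmp.XXXXXXXXXX, as in this log
    echo "<psk material placeholder>" > "$key_path"   # illustrative only, not the real key
    chmod 0600 "$key_path"                  # per this trace, looser permissions are rejected

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

With the mode tightened to 0600, the same nvmf_subsystem_add_host call succeeds later in the run (tls.sh@185 onwards).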
00:13:17.658 [2024-07-25 13:09:09.743123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.658 [2024-07-25 13:09:09.795890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.jq1FUXD1rm 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jq1FUXD1rm 00:13:18.595 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:18.596 [2024-07-25 13:09:10.723625] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.596 13:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:18.854 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:19.113 [2024-07-25 13:09:11.259773] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:19.113 [2024-07-25 13:09:11.260047] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.113 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:19.371 malloc0 00:13:19.371 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:19.630 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm 00:13:19.889 [2024-07-25 13:09:11.891147] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
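The setup_nvmf_tgt steps traced above reduce to six rpc.py calls against the running target. Collected in one place for readability (paths and the temporary key file name are the ones recorded in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.jq1FUXD1rm

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # TLS listener ("secure_channel": true in the saved config)
    $rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB backing bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"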
00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=72685 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 72685 /var/tmp/bdevperf.sock 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72685 ']' 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.889 13:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.889 [2024-07-25 13:09:11.949812] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:19.889 [2024-07-25 13:09:11.950068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72685 ] 00:13:20.148 [2024-07-25 13:09:12.082016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.148 [2024-07-25 13:09:12.165419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.148 [2024-07-25 13:09:12.221106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:20.148 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.148 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:20.148 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm 00:13:20.407 [2024-07-25 13:09:12.512018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.407 [2024-07-25 13:09:12.512297] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:20.407 TLSTESTn1 00:13:20.665 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:20.926 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:20.926 "subsystems": [ 00:13:20.926 { 00:13:20.926 "subsystem": "keyring", 00:13:20.926 "config": [] 00:13:20.926 }, 00:13:20.926 { 00:13:20.926 "subsystem": "iobuf", 00:13:20.926 "config": [ 00:13:20.926 { 00:13:20.926 "method": "iobuf_set_options", 00:13:20.926 "params": { 00:13:20.926 "small_pool_count": 8192, 00:13:20.926 
"large_pool_count": 1024, 00:13:20.926 "small_bufsize": 8192, 00:13:20.926 "large_bufsize": 135168 00:13:20.926 } 00:13:20.926 } 00:13:20.926 ] 00:13:20.926 }, 00:13:20.926 { 00:13:20.926 "subsystem": "sock", 00:13:20.926 "config": [ 00:13:20.926 { 00:13:20.926 "method": "sock_set_default_impl", 00:13:20.926 "params": { 00:13:20.926 "impl_name": "uring" 00:13:20.926 } 00:13:20.926 }, 00:13:20.926 { 00:13:20.926 "method": "sock_impl_set_options", 00:13:20.926 "params": { 00:13:20.926 "impl_name": "ssl", 00:13:20.926 "recv_buf_size": 4096, 00:13:20.926 "send_buf_size": 4096, 00:13:20.926 "enable_recv_pipe": true, 00:13:20.926 "enable_quickack": false, 00:13:20.926 "enable_placement_id": 0, 00:13:20.926 "enable_zerocopy_send_server": true, 00:13:20.926 "enable_zerocopy_send_client": false, 00:13:20.926 "zerocopy_threshold": 0, 00:13:20.926 "tls_version": 0, 00:13:20.926 "enable_ktls": false 00:13:20.926 } 00:13:20.926 }, 00:13:20.926 { 00:13:20.926 "method": "sock_impl_set_options", 00:13:20.926 "params": { 00:13:20.926 "impl_name": "posix", 00:13:20.926 "recv_buf_size": 2097152, 00:13:20.926 "send_buf_size": 2097152, 00:13:20.926 "enable_recv_pipe": true, 00:13:20.926 "enable_quickack": false, 00:13:20.926 "enable_placement_id": 0, 00:13:20.926 "enable_zerocopy_send_server": true, 00:13:20.926 "enable_zerocopy_send_client": false, 00:13:20.926 "zerocopy_threshold": 0, 00:13:20.926 "tls_version": 0, 00:13:20.926 "enable_ktls": false 00:13:20.926 } 00:13:20.926 }, 00:13:20.926 { 00:13:20.926 "method": "sock_impl_set_options", 00:13:20.926 "params": { 00:13:20.926 "impl_name": "uring", 00:13:20.926 "recv_buf_size": 2097152, 00:13:20.926 "send_buf_size": 2097152, 00:13:20.926 "enable_recv_pipe": true, 00:13:20.926 "enable_quickack": false, 00:13:20.926 "enable_placement_id": 0, 00:13:20.926 "enable_zerocopy_send_server": false, 00:13:20.926 "enable_zerocopy_send_client": false, 00:13:20.926 "zerocopy_threshold": 0, 00:13:20.927 "tls_version": 0, 00:13:20.927 "enable_ktls": false 00:13:20.927 } 00:13:20.927 } 00:13:20.927 ] 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "subsystem": "vmd", 00:13:20.927 "config": [] 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "subsystem": "accel", 00:13:20.927 "config": [ 00:13:20.927 { 00:13:20.927 "method": "accel_set_options", 00:13:20.927 "params": { 00:13:20.927 "small_cache_size": 128, 00:13:20.927 "large_cache_size": 16, 00:13:20.927 "task_count": 2048, 00:13:20.927 "sequence_count": 2048, 00:13:20.927 "buf_count": 2048 00:13:20.927 } 00:13:20.927 } 00:13:20.927 ] 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "subsystem": "bdev", 00:13:20.927 "config": [ 00:13:20.927 { 00:13:20.927 "method": "bdev_set_options", 00:13:20.927 "params": { 00:13:20.927 "bdev_io_pool_size": 65535, 00:13:20.927 "bdev_io_cache_size": 256, 00:13:20.927 "bdev_auto_examine": true, 00:13:20.927 "iobuf_small_cache_size": 128, 00:13:20.927 "iobuf_large_cache_size": 16 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "bdev_raid_set_options", 00:13:20.927 "params": { 00:13:20.927 "process_window_size_kb": 1024, 00:13:20.927 "process_max_bandwidth_mb_sec": 0 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "bdev_iscsi_set_options", 00:13:20.927 "params": { 00:13:20.927 "timeout_sec": 30 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "bdev_nvme_set_options", 00:13:20.927 "params": { 00:13:20.927 "action_on_timeout": "none", 00:13:20.927 "timeout_us": 0, 00:13:20.927 "timeout_admin_us": 0, 00:13:20.927 "keep_alive_timeout_ms": 10000, 
00:13:20.927 "arbitration_burst": 0, 00:13:20.927 "low_priority_weight": 0, 00:13:20.927 "medium_priority_weight": 0, 00:13:20.927 "high_priority_weight": 0, 00:13:20.927 "nvme_adminq_poll_period_us": 10000, 00:13:20.927 "nvme_ioq_poll_period_us": 0, 00:13:20.927 "io_queue_requests": 0, 00:13:20.927 "delay_cmd_submit": true, 00:13:20.927 "transport_retry_count": 4, 00:13:20.927 "bdev_retry_count": 3, 00:13:20.927 "transport_ack_timeout": 0, 00:13:20.927 "ctrlr_loss_timeout_sec": 0, 00:13:20.927 "reconnect_delay_sec": 0, 00:13:20.927 "fast_io_fail_timeout_sec": 0, 00:13:20.927 "disable_auto_failback": false, 00:13:20.927 "generate_uuids": false, 00:13:20.927 "transport_tos": 0, 00:13:20.927 "nvme_error_stat": false, 00:13:20.927 "rdma_srq_size": 0, 00:13:20.927 "io_path_stat": false, 00:13:20.927 "allow_accel_sequence": false, 00:13:20.927 "rdma_max_cq_size": 0, 00:13:20.927 "rdma_cm_event_timeout_ms": 0, 00:13:20.927 "dhchap_digests": [ 00:13:20.927 "sha256", 00:13:20.927 "sha384", 00:13:20.927 "sha512" 00:13:20.927 ], 00:13:20.927 "dhchap_dhgroups": [ 00:13:20.927 "null", 00:13:20.927 "ffdhe2048", 00:13:20.927 "ffdhe3072", 00:13:20.927 "ffdhe4096", 00:13:20.927 "ffdhe6144", 00:13:20.927 "ffdhe8192" 00:13:20.927 ] 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "bdev_nvme_set_hotplug", 00:13:20.927 "params": { 00:13:20.927 "period_us": 100000, 00:13:20.927 "enable": false 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "bdev_malloc_create", 00:13:20.927 "params": { 00:13:20.927 "name": "malloc0", 00:13:20.927 "num_blocks": 8192, 00:13:20.927 "block_size": 4096, 00:13:20.927 "physical_block_size": 4096, 00:13:20.927 "uuid": "438343ad-e1ef-4ad9-96bb-f7bfa553b349", 00:13:20.927 "optimal_io_boundary": 0, 00:13:20.927 "md_size": 0, 00:13:20.927 "dif_type": 0, 00:13:20.927 "dif_is_head_of_md": false, 00:13:20.927 "dif_pi_format": 0 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "bdev_wait_for_examine" 00:13:20.927 } 00:13:20.927 ] 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "subsystem": "nbd", 00:13:20.927 "config": [] 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "subsystem": "scheduler", 00:13:20.927 "config": [ 00:13:20.927 { 00:13:20.927 "method": "framework_set_scheduler", 00:13:20.927 "params": { 00:13:20.927 "name": "static" 00:13:20.927 } 00:13:20.927 } 00:13:20.927 ] 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "subsystem": "nvmf", 00:13:20.927 "config": [ 00:13:20.927 { 00:13:20.927 "method": "nvmf_set_config", 00:13:20.927 "params": { 00:13:20.927 "discovery_filter": "match_any", 00:13:20.927 "admin_cmd_passthru": { 00:13:20.927 "identify_ctrlr": false 00:13:20.927 } 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "nvmf_set_max_subsystems", 00:13:20.927 "params": { 00:13:20.927 "max_subsystems": 1024 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "nvmf_set_crdt", 00:13:20.927 "params": { 00:13:20.927 "crdt1": 0, 00:13:20.927 "crdt2": 0, 00:13:20.927 "crdt3": 0 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "nvmf_create_transport", 00:13:20.927 "params": { 00:13:20.927 "trtype": "TCP", 00:13:20.927 "max_queue_depth": 128, 00:13:20.927 "max_io_qpairs_per_ctrlr": 127, 00:13:20.927 "in_capsule_data_size": 4096, 00:13:20.927 "max_io_size": 131072, 00:13:20.927 "io_unit_size": 131072, 00:13:20.927 "max_aq_depth": 128, 00:13:20.927 "num_shared_buffers": 511, 00:13:20.927 "buf_cache_size": 4294967295, 00:13:20.927 "dif_insert_or_strip": false, 00:13:20.927 "zcopy": 
false, 00:13:20.927 "c2h_success": false, 00:13:20.927 "sock_priority": 0, 00:13:20.927 "abort_timeout_sec": 1, 00:13:20.927 "ack_timeout": 0, 00:13:20.927 "data_wr_pool_size": 0 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "nvmf_create_subsystem", 00:13:20.927 "params": { 00:13:20.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.927 "allow_any_host": false, 00:13:20.927 "serial_number": "SPDK00000000000001", 00:13:20.927 "model_number": "SPDK bdev Controller", 00:13:20.927 "max_namespaces": 10, 00:13:20.927 "min_cntlid": 1, 00:13:20.927 "max_cntlid": 65519, 00:13:20.927 "ana_reporting": false 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "nvmf_subsystem_add_host", 00:13:20.927 "params": { 00:13:20.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.927 "host": "nqn.2016-06.io.spdk:host1", 00:13:20.927 "psk": "/tmp/tmp.jq1FUXD1rm" 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "nvmf_subsystem_add_ns", 00:13:20.927 "params": { 00:13:20.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.927 "namespace": { 00:13:20.927 "nsid": 1, 00:13:20.927 "bdev_name": "malloc0", 00:13:20.927 "nguid": "438343ADE1EF4AD996BBF7BFA553B349", 00:13:20.927 "uuid": "438343ad-e1ef-4ad9-96bb-f7bfa553b349", 00:13:20.927 "no_auto_visible": false 00:13:20.927 } 00:13:20.927 } 00:13:20.927 }, 00:13:20.927 { 00:13:20.927 "method": "nvmf_subsystem_add_listener", 00:13:20.927 "params": { 00:13:20.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.927 "listen_address": { 00:13:20.927 "trtype": "TCP", 00:13:20.927 "adrfam": "IPv4", 00:13:20.927 "traddr": "10.0.0.2", 00:13:20.927 "trsvcid": "4420" 00:13:20.927 }, 00:13:20.927 "secure_channel": true 00:13:20.927 } 00:13:20.927 } 00:13:20.928 ] 00:13:20.928 } 00:13:20.928 ] 00:13:20.928 }' 00:13:20.928 13:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:21.187 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:21.187 "subsystems": [ 00:13:21.187 { 00:13:21.187 "subsystem": "keyring", 00:13:21.187 "config": [] 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "subsystem": "iobuf", 00:13:21.187 "config": [ 00:13:21.187 { 00:13:21.187 "method": "iobuf_set_options", 00:13:21.187 "params": { 00:13:21.187 "small_pool_count": 8192, 00:13:21.187 "large_pool_count": 1024, 00:13:21.187 "small_bufsize": 8192, 00:13:21.187 "large_bufsize": 135168 00:13:21.187 } 00:13:21.187 } 00:13:21.187 ] 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "subsystem": "sock", 00:13:21.187 "config": [ 00:13:21.187 { 00:13:21.187 "method": "sock_set_default_impl", 00:13:21.187 "params": { 00:13:21.187 "impl_name": "uring" 00:13:21.187 } 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "method": "sock_impl_set_options", 00:13:21.187 "params": { 00:13:21.187 "impl_name": "ssl", 00:13:21.187 "recv_buf_size": 4096, 00:13:21.187 "send_buf_size": 4096, 00:13:21.187 "enable_recv_pipe": true, 00:13:21.187 "enable_quickack": false, 00:13:21.187 "enable_placement_id": 0, 00:13:21.187 "enable_zerocopy_send_server": true, 00:13:21.187 "enable_zerocopy_send_client": false, 00:13:21.187 "zerocopy_threshold": 0, 00:13:21.187 "tls_version": 0, 00:13:21.187 "enable_ktls": false 00:13:21.187 } 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "method": "sock_impl_set_options", 00:13:21.187 "params": { 00:13:21.187 "impl_name": "posix", 00:13:21.187 "recv_buf_size": 2097152, 00:13:21.187 "send_buf_size": 2097152, 00:13:21.187 
"enable_recv_pipe": true, 00:13:21.187 "enable_quickack": false, 00:13:21.187 "enable_placement_id": 0, 00:13:21.187 "enable_zerocopy_send_server": true, 00:13:21.187 "enable_zerocopy_send_client": false, 00:13:21.187 "zerocopy_threshold": 0, 00:13:21.187 "tls_version": 0, 00:13:21.187 "enable_ktls": false 00:13:21.187 } 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "method": "sock_impl_set_options", 00:13:21.187 "params": { 00:13:21.187 "impl_name": "uring", 00:13:21.187 "recv_buf_size": 2097152, 00:13:21.187 "send_buf_size": 2097152, 00:13:21.187 "enable_recv_pipe": true, 00:13:21.187 "enable_quickack": false, 00:13:21.187 "enable_placement_id": 0, 00:13:21.187 "enable_zerocopy_send_server": false, 00:13:21.187 "enable_zerocopy_send_client": false, 00:13:21.187 "zerocopy_threshold": 0, 00:13:21.187 "tls_version": 0, 00:13:21.187 "enable_ktls": false 00:13:21.187 } 00:13:21.187 } 00:13:21.187 ] 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "subsystem": "vmd", 00:13:21.187 "config": [] 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "subsystem": "accel", 00:13:21.187 "config": [ 00:13:21.187 { 00:13:21.187 "method": "accel_set_options", 00:13:21.187 "params": { 00:13:21.187 "small_cache_size": 128, 00:13:21.187 "large_cache_size": 16, 00:13:21.187 "task_count": 2048, 00:13:21.187 "sequence_count": 2048, 00:13:21.187 "buf_count": 2048 00:13:21.187 } 00:13:21.187 } 00:13:21.187 ] 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "subsystem": "bdev", 00:13:21.187 "config": [ 00:13:21.187 { 00:13:21.187 "method": "bdev_set_options", 00:13:21.187 "params": { 00:13:21.187 "bdev_io_pool_size": 65535, 00:13:21.187 "bdev_io_cache_size": 256, 00:13:21.187 "bdev_auto_examine": true, 00:13:21.187 "iobuf_small_cache_size": 128, 00:13:21.187 "iobuf_large_cache_size": 16 00:13:21.187 } 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "method": "bdev_raid_set_options", 00:13:21.187 "params": { 00:13:21.187 "process_window_size_kb": 1024, 00:13:21.187 "process_max_bandwidth_mb_sec": 0 00:13:21.187 } 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "method": "bdev_iscsi_set_options", 00:13:21.187 "params": { 00:13:21.187 "timeout_sec": 30 00:13:21.187 } 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "method": "bdev_nvme_set_options", 00:13:21.187 "params": { 00:13:21.187 "action_on_timeout": "none", 00:13:21.187 "timeout_us": 0, 00:13:21.187 "timeout_admin_us": 0, 00:13:21.187 "keep_alive_timeout_ms": 10000, 00:13:21.187 "arbitration_burst": 0, 00:13:21.187 "low_priority_weight": 0, 00:13:21.187 "medium_priority_weight": 0, 00:13:21.187 "high_priority_weight": 0, 00:13:21.187 "nvme_adminq_poll_period_us": 10000, 00:13:21.187 "nvme_ioq_poll_period_us": 0, 00:13:21.187 "io_queue_requests": 512, 00:13:21.187 "delay_cmd_submit": true, 00:13:21.187 "transport_retry_count": 4, 00:13:21.187 "bdev_retry_count": 3, 00:13:21.187 "transport_ack_timeout": 0, 00:13:21.187 "ctrlr_loss_timeout_sec": 0, 00:13:21.187 "reconnect_delay_sec": 0, 00:13:21.187 "fast_io_fail_timeout_sec": 0, 00:13:21.187 "disable_auto_failback": false, 00:13:21.187 "generate_uuids": false, 00:13:21.187 "transport_tos": 0, 00:13:21.187 "nvme_error_stat": false, 00:13:21.187 "rdma_srq_size": 0, 00:13:21.187 "io_path_stat": false, 00:13:21.187 "allow_accel_sequence": false, 00:13:21.187 "rdma_max_cq_size": 0, 00:13:21.187 "rdma_cm_event_timeout_ms": 0, 00:13:21.187 "dhchap_digests": [ 00:13:21.187 "sha256", 00:13:21.187 "sha384", 00:13:21.187 "sha512" 00:13:21.187 ], 00:13:21.187 "dhchap_dhgroups": [ 00:13:21.187 "null", 00:13:21.187 "ffdhe2048", 00:13:21.187 "ffdhe3072", 
00:13:21.187 "ffdhe4096", 00:13:21.187 "ffdhe6144", 00:13:21.187 "ffdhe8192" 00:13:21.187 ] 00:13:21.187 } 00:13:21.187 }, 00:13:21.187 { 00:13:21.187 "method": "bdev_nvme_attach_controller", 00:13:21.187 "params": { 00:13:21.187 "name": "TLSTEST", 00:13:21.188 "trtype": "TCP", 00:13:21.188 "adrfam": "IPv4", 00:13:21.188 "traddr": "10.0.0.2", 00:13:21.188 "trsvcid": "4420", 00:13:21.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.188 "prchk_reftag": false, 00:13:21.188 "prchk_guard": false, 00:13:21.188 "ctrlr_loss_timeout_sec": 0, 00:13:21.188 "reconnect_delay_sec": 0, 00:13:21.188 "fast_io_fail_timeout_sec": 0, 00:13:21.188 "psk": "/tmp/tmp.jq1FUXD1rm", 00:13:21.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.188 "hdgst": false, 00:13:21.188 "ddgst": false 00:13:21.188 } 00:13:21.188 }, 00:13:21.188 { 00:13:21.188 "method": "bdev_nvme_set_hotplug", 00:13:21.188 "params": { 00:13:21.188 "period_us": 100000, 00:13:21.188 "enable": false 00:13:21.188 } 00:13:21.188 }, 00:13:21.188 { 00:13:21.188 "method": "bdev_wait_for_examine" 00:13:21.188 } 00:13:21.188 ] 00:13:21.188 }, 00:13:21.188 { 00:13:21.188 "subsystem": "nbd", 00:13:21.188 "config": [] 00:13:21.188 } 00:13:21.188 ] 00:13:21.188 }' 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 72685 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72685 ']' 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72685 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72685 00:13:21.188 killing process with pid 72685 00:13:21.188 Received shutdown signal, test time was about 10.000000 seconds 00:13:21.188 00:13:21.188 Latency(us) 00:13:21.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.188 =================================================================================================================== 00:13:21.188 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72685' 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72685 00:13:21.188 [2024-07-25 13:09:13.228506] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:21.188 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72685 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 72636 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72636 ']' 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72636 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:21.446 
13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72636 00:13:21.446 killing process with pid 72636 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72636' 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72636 00:13:21.446 [2024-07-25 13:09:13.466586] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:21.446 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72636 00:13:21.704 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:21.704 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.704 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.704 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.704 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:21.704 "subsystems": [ 00:13:21.704 { 00:13:21.704 "subsystem": "keyring", 00:13:21.704 "config": [] 00:13:21.704 }, 00:13:21.704 { 00:13:21.704 "subsystem": "iobuf", 00:13:21.704 "config": [ 00:13:21.704 { 00:13:21.704 "method": "iobuf_set_options", 00:13:21.704 "params": { 00:13:21.704 "small_pool_count": 8192, 00:13:21.704 "large_pool_count": 1024, 00:13:21.704 "small_bufsize": 8192, 00:13:21.704 "large_bufsize": 135168 00:13:21.704 } 00:13:21.704 } 00:13:21.704 ] 00:13:21.704 }, 00:13:21.704 { 00:13:21.704 "subsystem": "sock", 00:13:21.704 "config": [ 00:13:21.704 { 00:13:21.704 "method": "sock_set_default_impl", 00:13:21.704 "params": { 00:13:21.704 "impl_name": "uring" 00:13:21.704 } 00:13:21.704 }, 00:13:21.704 { 00:13:21.704 "method": "sock_impl_set_options", 00:13:21.705 "params": { 00:13:21.705 "impl_name": "ssl", 00:13:21.705 "recv_buf_size": 4096, 00:13:21.705 "send_buf_size": 4096, 00:13:21.705 "enable_recv_pipe": true, 00:13:21.705 "enable_quickack": false, 00:13:21.705 "enable_placement_id": 0, 00:13:21.705 "enable_zerocopy_send_server": true, 00:13:21.705 "enable_zerocopy_send_client": false, 00:13:21.705 "zerocopy_threshold": 0, 00:13:21.705 "tls_version": 0, 00:13:21.705 "enable_ktls": false 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "sock_impl_set_options", 00:13:21.705 "params": { 00:13:21.705 "impl_name": "posix", 00:13:21.705 "recv_buf_size": 2097152, 00:13:21.705 "send_buf_size": 2097152, 00:13:21.705 "enable_recv_pipe": true, 00:13:21.705 "enable_quickack": false, 00:13:21.705 "enable_placement_id": 0, 00:13:21.705 "enable_zerocopy_send_server": true, 00:13:21.705 "enable_zerocopy_send_client": false, 00:13:21.705 "zerocopy_threshold": 0, 00:13:21.705 "tls_version": 0, 00:13:21.705 "enable_ktls": false 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "sock_impl_set_options", 00:13:21.705 "params": { 00:13:21.705 "impl_name": "uring", 00:13:21.705 "recv_buf_size": 2097152, 
00:13:21.705 "send_buf_size": 2097152, 00:13:21.705 "enable_recv_pipe": true, 00:13:21.705 "enable_quickack": false, 00:13:21.705 "enable_placement_id": 0, 00:13:21.705 "enable_zerocopy_send_server": false, 00:13:21.705 "enable_zerocopy_send_client": false, 00:13:21.705 "zerocopy_threshold": 0, 00:13:21.705 "tls_version": 0, 00:13:21.705 "enable_ktls": false 00:13:21.705 } 00:13:21.705 } 00:13:21.705 ] 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "subsystem": "vmd", 00:13:21.705 "config": [] 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "subsystem": "accel", 00:13:21.705 "config": [ 00:13:21.705 { 00:13:21.705 "method": "accel_set_options", 00:13:21.705 "params": { 00:13:21.705 "small_cache_size": 128, 00:13:21.705 "large_cache_size": 16, 00:13:21.705 "task_count": 2048, 00:13:21.705 "sequence_count": 2048, 00:13:21.705 "buf_count": 2048 00:13:21.705 } 00:13:21.705 } 00:13:21.705 ] 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "subsystem": "bdev", 00:13:21.705 "config": [ 00:13:21.705 { 00:13:21.705 "method": "bdev_set_options", 00:13:21.705 "params": { 00:13:21.705 "bdev_io_pool_size": 65535, 00:13:21.705 "bdev_io_cache_size": 256, 00:13:21.705 "bdev_auto_examine": true, 00:13:21.705 "iobuf_small_cache_size": 128, 00:13:21.705 "iobuf_large_cache_size": 16 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "bdev_raid_set_options", 00:13:21.705 "params": { 00:13:21.705 "process_window_size_kb": 1024, 00:13:21.705 "process_max_bandwidth_mb_sec": 0 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "bdev_iscsi_set_options", 00:13:21.705 "params": { 00:13:21.705 "timeout_sec": 30 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "bdev_nvme_set_options", 00:13:21.705 "params": { 00:13:21.705 "action_on_timeout": "none", 00:13:21.705 "timeout_us": 0, 00:13:21.705 "timeout_admin_us": 0, 00:13:21.705 "keep_alive_timeout_ms": 10000, 00:13:21.705 "arbitration_burst": 0, 00:13:21.705 "low_priority_weight": 0, 00:13:21.705 "medium_priority_weight": 0, 00:13:21.705 "high_priority_weight": 0, 00:13:21.705 "nvme_adminq_poll_period_us": 10000, 00:13:21.705 "nvme_ioq_poll_period_us": 0, 00:13:21.705 "io_queue_requests": 0, 00:13:21.705 "delay_cmd_submit": true, 00:13:21.705 "transport_retry_count": 4, 00:13:21.705 "bdev_retry_count": 3, 00:13:21.705 "transport_ack_timeout": 0, 00:13:21.705 "ctrlr_loss_timeout_sec": 0, 00:13:21.705 "reconnect_delay_sec": 0, 00:13:21.705 "fast_io_fail_timeout_sec": 0, 00:13:21.705 "disable_auto_failback": false, 00:13:21.705 "generate_uuids": false, 00:13:21.705 "transport_tos": 0, 00:13:21.705 "nvme_error_stat": false, 00:13:21.705 "rdma_srq_size": 0, 00:13:21.705 "io_path_stat": false, 00:13:21.705 "allow_accel_sequence": false, 00:13:21.705 "rdma_max_cq_size": 0, 00:13:21.705 "rdma_cm_event_timeout_ms": 0, 00:13:21.705 "dhchap_digests": [ 00:13:21.705 "sha256", 00:13:21.705 "sha384", 00:13:21.705 "sha512" 00:13:21.705 ], 00:13:21.705 "dhchap_dhgroups": [ 00:13:21.705 "null", 00:13:21.705 "ffdhe2048", 00:13:21.705 "ffdhe3072", 00:13:21.705 "ffdhe4096", 00:13:21.705 "ffdhe6144", 00:13:21.705 "ffdhe8192" 00:13:21.705 ] 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "bdev_nvme_set_hotplug", 00:13:21.705 "params": { 00:13:21.705 "period_us": 100000, 00:13:21.705 "enable": false 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "bdev_malloc_create", 00:13:21.705 "params": { 00:13:21.705 "name": "malloc0", 00:13:21.705 "num_blocks": 8192, 00:13:21.705 "block_size": 4096, 00:13:21.705 
"physical_block_size": 4096, 00:13:21.705 "uuid": "438343ad-e1ef-4ad9-96bb-f7bfa553b349", 00:13:21.705 "optimal_io_boundary": 0, 00:13:21.705 "md_size": 0, 00:13:21.705 "dif_type": 0, 00:13:21.705 "dif_is_head_of_md": false, 00:13:21.705 "dif_pi_format": 0 00:13:21.705 } 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "method": "bdev_wait_for_examine" 00:13:21.705 } 00:13:21.705 ] 00:13:21.705 }, 00:13:21.705 { 00:13:21.705 "subsystem": "nbd", 00:13:21.706 "config": [] 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "subsystem": "scheduler", 00:13:21.706 "config": [ 00:13:21.706 { 00:13:21.706 "method": "framework_set_scheduler", 00:13:21.706 "params": { 00:13:21.706 "name": "static" 00:13:21.706 } 00:13:21.706 } 00:13:21.706 ] 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "subsystem": "nvmf", 00:13:21.706 "config": [ 00:13:21.706 { 00:13:21.706 "method": "nvmf_set_config", 00:13:21.706 "params": { 00:13:21.706 "discovery_filter": "match_any", 00:13:21.706 "admin_cmd_passthru": { 00:13:21.706 "identify_ctrlr": false 00:13:21.706 } 00:13:21.706 } 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "method": "nvmf_set_max_subsystems", 00:13:21.706 "params": { 00:13:21.706 "max_subsystems": 1024 00:13:21.706 } 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "method": "nvmf_set_crdt", 00:13:21.706 "params": { 00:13:21.706 "crdt1": 0, 00:13:21.706 "crdt2": 0, 00:13:21.706 "crdt3": 0 00:13:21.706 } 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "method": "nvmf_create_transport", 00:13:21.706 "params": { 00:13:21.706 "trtype": "TCP", 00:13:21.706 "max_queue_depth": 128, 00:13:21.706 "max_io_qpairs_per_ctrlr": 127, 00:13:21.706 "in_capsule_data_size": 4096, 00:13:21.706 "max_io_size": 131072, 00:13:21.706 "io_unit_size": 131072, 00:13:21.706 "max_aq_depth": 128, 00:13:21.706 "num_shared_buffers": 511, 00:13:21.706 "buf_cache_size": 4294967295, 00:13:21.706 "dif_insert_or_strip": false, 00:13:21.706 "zcopy": false, 00:13:21.706 "c2h_success": false, 00:13:21.706 "sock_priority": 0, 00:13:21.706 "abort_timeout_sec": 1, 00:13:21.706 "ack_timeout": 0, 00:13:21.706 "data_wr_pool_size": 0 00:13:21.706 } 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "method": "nvmf_create_subsystem", 00:13:21.706 "params": { 00:13:21.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.706 "allow_any_host": false, 00:13:21.706 "serial_number": "SPDK00000000000001", 00:13:21.706 "model_number": "SPDK bdev Controller", 00:13:21.706 "max_namespaces": 10, 00:13:21.706 "min_cntlid": 1, 00:13:21.706 "max_cntlid": 65519, 00:13:21.706 "ana_reporting": false 00:13:21.706 } 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "method": "nvmf_subsystem_add_host", 00:13:21.706 "params": { 00:13:21.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.706 "host": "nqn.2016-06.io.spdk:host1", 00:13:21.706 "psk": "/tmp/tmp.jq1FUXD1rm" 00:13:21.706 } 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "method": "nvmf_subsystem_add_ns", 00:13:21.706 "params": { 00:13:21.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.706 "namespace": { 00:13:21.706 "nsid": 1, 00:13:21.706 "bdev_name": "malloc0", 00:13:21.706 "nguid": "438343ADE1EF4AD996BBF7BFA553B349", 00:13:21.706 "uuid": "438343ad-e1ef-4ad9-96bb-f7bfa553b349", 00:13:21.706 "no_auto_visible": false 00:13:21.706 } 00:13:21.706 } 00:13:21.706 }, 00:13:21.706 { 00:13:21.706 "method": "nvmf_subsystem_add_listener", 00:13:21.706 "params": { 00:13:21.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.706 "listen_address": { 00:13:21.706 "trtype": "TCP", 00:13:21.706 "adrfam": "IPv4", 00:13:21.706 "traddr": "10.0.0.2", 00:13:21.706 "trsvcid": 
"4420" 00:13:21.706 }, 00:13:21.706 "secure_channel": true 00:13:21.706 } 00:13:21.706 } 00:13:21.706 ] 00:13:21.706 } 00:13:21.706 ] 00:13:21.706 }' 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72726 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72726 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72726 ']' 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.706 13:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.706 [2024-07-25 13:09:13.742242] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:21.706 [2024-07-25 13:09:13.742509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.706 [2024-07-25 13:09:13.872190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.965 [2024-07-25 13:09:13.957502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.965 [2024-07-25 13:09:13.957561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.965 [2024-07-25 13:09:13.957589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.965 [2024-07-25 13:09:13.957597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.965 [2024-07-25 13:09:13.957604] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:21.965 [2024-07-25 13:09:13.957691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.965 [2024-07-25 13:09:14.123309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:22.224 [2024-07-25 13:09:14.189205] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.224 [2024-07-25 13:09:14.205074] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:22.224 [2024-07-25 13:09:14.221115] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:22.224 [2024-07-25 13:09:14.230001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=72758 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 72758 /var/tmp/bdevperf.sock 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72758 ']' 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
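The bdevperf instance being started here receives its bdev_nvme_attach_controller call through the JSON configuration echoed just below (-c /dev/fd/63). Earlier in the run (tls.sh@192) the equivalent attachment was issued directly over bdevperf's RPC socket, which is the more compact way to see the initiator-side TLS parameters used in this test:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm

As the warnings in this trace note, the file-path form of --psk (spdk_nvme_ctrlr_opts.psk) is a deprecated feature scheduled for removal in v24.09.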
00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.792 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:22.792 "subsystems": [ 00:13:22.792 { 00:13:22.792 "subsystem": "keyring", 00:13:22.792 "config": [] 00:13:22.792 }, 00:13:22.792 { 00:13:22.792 "subsystem": "iobuf", 00:13:22.792 "config": [ 00:13:22.792 { 00:13:22.792 "method": "iobuf_set_options", 00:13:22.792 "params": { 00:13:22.792 "small_pool_count": 8192, 00:13:22.792 "large_pool_count": 1024, 00:13:22.792 "small_bufsize": 8192, 00:13:22.792 "large_bufsize": 135168 00:13:22.792 } 00:13:22.792 } 00:13:22.792 ] 00:13:22.792 }, 00:13:22.792 { 00:13:22.792 "subsystem": "sock", 00:13:22.792 "config": [ 00:13:22.792 { 00:13:22.792 "method": "sock_set_default_impl", 00:13:22.792 "params": { 00:13:22.792 "impl_name": "uring" 00:13:22.792 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "sock_impl_set_options", 00:13:22.793 "params": { 00:13:22.793 "impl_name": "ssl", 00:13:22.793 "recv_buf_size": 4096, 00:13:22.793 "send_buf_size": 4096, 00:13:22.793 "enable_recv_pipe": true, 00:13:22.793 "enable_quickack": false, 00:13:22.793 "enable_placement_id": 0, 00:13:22.793 "enable_zerocopy_send_server": true, 00:13:22.793 "enable_zerocopy_send_client": false, 00:13:22.793 "zerocopy_threshold": 0, 00:13:22.793 "tls_version": 0, 00:13:22.793 "enable_ktls": false 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "sock_impl_set_options", 00:13:22.793 "params": { 00:13:22.793 "impl_name": "posix", 00:13:22.793 "recv_buf_size": 2097152, 00:13:22.793 "send_buf_size": 2097152, 00:13:22.793 "enable_recv_pipe": true, 00:13:22.793 "enable_quickack": false, 00:13:22.793 "enable_placement_id": 0, 00:13:22.793 "enable_zerocopy_send_server": true, 00:13:22.793 "enable_zerocopy_send_client": false, 00:13:22.793 "zerocopy_threshold": 0, 00:13:22.793 "tls_version": 0, 00:13:22.793 "enable_ktls": false 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "sock_impl_set_options", 00:13:22.793 "params": { 00:13:22.793 "impl_name": "uring", 00:13:22.793 "recv_buf_size": 2097152, 00:13:22.793 "send_buf_size": 2097152, 00:13:22.793 "enable_recv_pipe": true, 00:13:22.793 "enable_quickack": false, 00:13:22.793 "enable_placement_id": 0, 00:13:22.793 "enable_zerocopy_send_server": false, 00:13:22.793 "enable_zerocopy_send_client": false, 00:13:22.793 "zerocopy_threshold": 0, 00:13:22.793 "tls_version": 0, 00:13:22.793 "enable_ktls": false 00:13:22.793 } 00:13:22.793 } 00:13:22.793 ] 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "subsystem": "vmd", 00:13:22.793 "config": [] 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "subsystem": "accel", 00:13:22.793 "config": [ 00:13:22.793 { 00:13:22.793 "method": "accel_set_options", 00:13:22.793 "params": { 00:13:22.793 "small_cache_size": 128, 00:13:22.793 "large_cache_size": 16, 00:13:22.793 "task_count": 2048, 00:13:22.793 "sequence_count": 2048, 00:13:22.793 "buf_count": 2048 00:13:22.793 } 00:13:22.793 } 00:13:22.793 ] 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "subsystem": "bdev", 00:13:22.793 "config": [ 00:13:22.793 { 00:13:22.793 "method": "bdev_set_options", 00:13:22.793 "params": { 00:13:22.793 "bdev_io_pool_size": 65535, 00:13:22.793 "bdev_io_cache_size": 256, 00:13:22.793 "bdev_auto_examine": true, 00:13:22.793 "iobuf_small_cache_size": 128, 00:13:22.793 "iobuf_large_cache_size": 16 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "bdev_raid_set_options", 
00:13:22.793 "params": { 00:13:22.793 "process_window_size_kb": 1024, 00:13:22.793 "process_max_bandwidth_mb_sec": 0 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "bdev_iscsi_set_options", 00:13:22.793 "params": { 00:13:22.793 "timeout_sec": 30 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "bdev_nvme_set_options", 00:13:22.793 "params": { 00:13:22.793 "action_on_timeout": "none", 00:13:22.793 "timeout_us": 0, 00:13:22.793 "timeout_admin_us": 0, 00:13:22.793 "keep_alive_timeout_ms": 10000, 00:13:22.793 "arbitration_burst": 0, 00:13:22.793 "low_priority_weight": 0, 00:13:22.793 "medium_priority_weight": 0, 00:13:22.793 "high_priority_weight": 0, 00:13:22.793 "nvme_adminq_poll_period_us": 10000, 00:13:22.793 "nvme_ioq_poll_period_us": 0, 00:13:22.793 "io_queue_requests": 512, 00:13:22.793 "delay_cmd_submit": true, 00:13:22.793 "transport_retry_count": 4, 00:13:22.793 "bdev_retry_count": 3, 00:13:22.793 "transport_ack_timeout": 0, 00:13:22.793 "ctrlr_loss_timeout_sec": 0, 00:13:22.793 "reconnect_delay_sec": 0, 00:13:22.793 "fast_io_fail_timeout_sec": 0, 00:13:22.793 "disable_auto_failback": false, 00:13:22.793 "generate_uuids": false, 00:13:22.793 "transport_tos": 0, 00:13:22.793 "nvme_error_stat": false, 00:13:22.793 "rdma_srq_size": 0, 00:13:22.793 "io_path_stat": false, 00:13:22.793 "allow_accel_sequence": false, 00:13:22.793 "rdma_max_cq_size": 0, 00:13:22.793 "rdma_cm_event_timeout_ms": 0, 00:13:22.793 "dhchap_digests": [ 00:13:22.793 "sha256", 00:13:22.793 "sha384", 00:13:22.793 "sha512" 00:13:22.793 ], 00:13:22.793 "dhchap_dhgroups": [ 00:13:22.793 "null", 00:13:22.793 "ffdhe2048", 00:13:22.793 "ffdhe3072", 00:13:22.793 "ffdhe4096", 00:13:22.793 "ffdhe6144", 00:13:22.793 "ffdhe8192" 00:13:22.793 ] 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "bdev_nvme_attach_controller", 00:13:22.793 "params": { 00:13:22.793 "name": "TLSTEST", 00:13:22.793 "trtype": "TCP", 00:13:22.793 "adrfam": "IPv4", 00:13:22.793 "traddr": "10.0.0.2", 00:13:22.793 "trsvcid": "4420", 00:13:22.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.793 "prchk_reftag": false, 00:13:22.793 "prchk_guard": false, 00:13:22.793 "ctrlr_loss_timeout_sec": 0, 00:13:22.793 "reconnect_delay_sec": 0, 00:13:22.793 "fast_io_fail_timeout_sec": 0, 00:13:22.793 "psk": "/tmp/tmp.jq1FUXD1rm", 00:13:22.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.793 "hdgst": false, 00:13:22.793 "ddgst": false 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "bdev_nvme_set_hotplug", 00:13:22.793 "params": { 00:13:22.793 "period_us": 100000, 00:13:22.793 "enable": false 00:13:22.793 } 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "method": "bdev_wait_for_examine" 00:13:22.793 } 00:13:22.793 ] 00:13:22.793 }, 00:13:22.793 { 00:13:22.793 "subsystem": "nbd", 00:13:22.793 "config": [] 00:13:22.793 } 00:13:22.793 ] 00:13:22.793 }' 00:13:22.793 13:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.793 [2024-07-25 13:09:14.809868] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:22.793 [2024-07-25 13:09:14.810307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:13:22.793 [2024-07-25 13:09:14.952362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.052 [2024-07-25 13:09:15.054949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.052 [2024-07-25 13:09:15.189139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:23.052 [2024-07-25 13:09:15.226976] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.052 [2024-07-25 13:09:15.227366] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:23.620 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.620 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:23.620 13:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:23.620 Running I/O for 10 seconds... 00:13:35.824 00:13:35.824 Latency(us) 00:13:35.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.824 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:35.824 Verification LBA range: start 0x0 length 0x2000 00:13:35.824 TLSTESTn1 : 10.03 3738.47 14.60 0.00 0.00 34164.79 8817.57 32172.22 00:13:35.824 =================================================================================================================== 00:13:35.824 Total : 3738.47 14.60 0.00 0.00 34164.79 8817.57 32172.22 00:13:35.824 0 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 72758 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72758 ']' 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72758 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72758 00:13:35.824 killing process with pid 72758 00:13:35.824 Received shutdown signal, test time was about 10.000000 seconds 00:13:35.824 00:13:35.824 Latency(us) 00:13:35.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.824 =================================================================================================================== 00:13:35.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 72758' 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72758 00:13:35.824 [2024-07-25 13:09:25.874130] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:35.824 13:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72758 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 72726 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72726 ']' 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72726 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72726 00:13:35.824 killing process with pid 72726 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72726' 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72726 00:13:35.824 [2024-07-25 13:09:26.111316] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72726 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:35.824 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72892 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72892 00:13:35.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72892 ']' 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
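The 10-second verify pass above completes cleanly (TLSTESTn1: 3738.47 IOPS, 14.60 MiB/s, no failures or timeouts), after which the harness tears the pair down: the bdevperf initiator (pid 72758) and the first nvmf_tgt (pid 72726) are killed, and nvmfappstart launches a fresh target inside the nvmf_tgt_ns_spdk network namespace, then waits for its RPC socket. A minimal sketch of that restart, using the binary path and flags from this run; the kill commands and the socket wait loop are simplified stand-ins for the harness's killprocess and waitforlisten helpers.

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

# stop the initiator first, then the old target (pids 72758 and 72726 in this run)
kill "$bdevperf_pid"
kill "$nvmfpid"

# fresh target in the test namespace: shared-memory id 0, all tracepoint groups enabled (-e 0xFFFF)
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF &
nvmfpid=$!

# wait for the JSON-RPC listener before issuing any rpc.py calls
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done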
00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:35.825 13:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.825 [2024-07-25 13:09:26.379467] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:35.825 [2024-07-25 13:09:26.379683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.825 [2024-07-25 13:09:26.516175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.825 [2024-07-25 13:09:26.614118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.825 [2024-07-25 13:09:26.614186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.825 [2024-07-25 13:09:26.614213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.825 [2024-07-25 13:09:26.614224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.825 [2024-07-25 13:09:26.614244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.825 [2024-07-25 13:09:26.614292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.825 [2024-07-25 13:09:26.670384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.jq1FUXD1rm 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jq1FUXD1rm 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:35.825 [2024-07-25 13:09:27.608326] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:35.825 13:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:36.084 [2024-07-25 13:09:28.068410] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:36.084 [2024-07-25 13:09:28.068616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.084 13:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:36.343 malloc0 00:13:36.343 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:36.602 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jq1FUXD1rm 00:13:36.860 [2024-07-25 13:09:28.806831] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=72947 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 72947 /var/tmp/bdevperf.sock 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72947 ']' 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.860 13:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.860 [2024-07-25 13:09:28.879973] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
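The setup_nvmf_tgt step above (target/tls.sh@51 through @58) reduces to a short RPC sequence against the new target: create the TCP transport, create the subsystem, add a TLS-enabled listener on 10.0.0.2:4420, back it with a 32 MiB malloc bdev (4 KiB blocks, hence the 8192 blocks visible in the saved configuration further down), and allow host1 using the PSK interchange file. The same calls collected in one place; the RPC and KEY variables are just shorthand for the paths used in this run.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # talks to /var/tmp/spdk.sock by default
KEY=/tmp/tmp.jq1FUXD1rm                            # PSK interchange file created earlier in the test

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                  # -k: TLS listener (experimental, per the notice above)
$RPC bdev_malloc_create 32 4096 -b malloc0         # 32 MiB bdev with 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk "$KEY"                                   # PSK-path form, deprecated per the warning above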
00:13:36.860 [2024-07-25 13:09:28.880070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72947 ] 00:13:36.860 [2024-07-25 13:09:29.018088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.118 [2024-07-25 13:09:29.102404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.118 [2024-07-25 13:09:29.155927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:37.685 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.685 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:37.685 13:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jq1FUXD1rm 00:13:37.944 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:38.203 [2024-07-25 13:09:30.202982] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:38.203 nvme0n1 00:13:38.203 13:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:38.203 Running I/O for 1 seconds... 00:13:39.581 00:13:39.581 Latency(us) 00:13:39.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.581 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:39.581 Verification LBA range: start 0x0 length 0x2000 00:13:39.581 nvme0n1 : 1.01 4074.58 15.92 0.00 0.00 31176.37 4230.05 26214.40 00:13:39.581 =================================================================================================================== 00:13:39.581 Total : 4074.58 15.92 0.00 0.00 31176.37 4230.05 26214.40 00:13:39.581 0 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 72947 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72947 ']' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72947 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72947 00:13:39.581 killing process with pid 72947 00:13:39.581 Received shutdown signal, test time was about 1.000000 seconds 00:13:39.581 00:13:39.581 Latency(us) 00:13:39.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.581 =================================================================================================================== 00:13:39.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:39.581 
13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72947' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72947 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72947 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 72892 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72892 ']' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72892 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72892 00:13:39.581 killing process with pid 72892 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72892' 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72892 00:13:39.581 [2024-07-25 13:09:31.710279] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:39.581 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72892 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72998 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72998 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72998 ']' 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
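On the initiator side the key is handled through the keyring rather than as a raw path (target/tls.sh@227 and @228 above, and again below for the next bdevperf instance): the interchange file is registered on the bdevperf RPC socket as key0, the controller is attached with --psk key0, and bdevperf.py then runs the verify job configured on the bdevperf command line (-q 128 -o 4k -w verify -t 1). The same three steps, with the socket, NQNs and key path from this run.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
PERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bdevperf.sock

# register the PSK interchange file under a name the initiator can reference
$RPC -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.jq1FUXD1rm

# attach over TCP with TLS, referencing the key by name instead of by path
$RPC -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# run the configured bdevperf job against the attached namespace (nvme0n1)
$PERF -s "$SOCK" perform_tests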
00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.840 13:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.840 [2024-07-25 13:09:32.008603] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:39.840 [2024-07-25 13:09:32.008711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.099 [2024-07-25 13:09:32.149097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.099 [2024-07-25 13:09:32.229867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.099 [2024-07-25 13:09:32.229966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.099 [2024-07-25 13:09:32.229978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.099 [2024-07-25 13:09:32.229985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.099 [2024-07-25 13:09:32.229992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.099 [2024-07-25 13:09:32.230018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.099 [2024-07-25 13:09:32.283883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.057 13:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.057 [2024-07-25 13:09:32.960604] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.057 malloc0 00:13:41.057 [2024-07-25 13:09:32.991975] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:41.057 [2024-07-25 13:09:32.992340] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73030 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73030 
/var/tmp/bdevperf.sock 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73030 ']' 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.057 13:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.057 [2024-07-25 13:09:33.069807] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:41.057 [2024-07-25 13:09:33.070074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73030 ] 00:13:41.057 [2024-07-25 13:09:33.208459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.316 [2024-07-25 13:09:33.315332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.316 [2024-07-25 13:09:33.375972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:41.884 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.884 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:41.884 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jq1FUXD1rm 00:13:42.143 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:42.401 [2024-07-25 13:09:34.549719] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.660 nvme0n1 00:13:42.660 13:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:42.660 Running I/O for 1 seconds... 
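This third connection exercises the same keyring-based attach; once its short verify pass completes (results follow), the test snapshots both applications with save_config: rpc_cmd save_config (the harness wrapper around rpc.py on the default target socket) yields the tgtcfg JSON printed below, and the same RPC against the bdevperf socket yields bperfcfg. Both blobs are later fed back verbatim to restart the two applications, as the tail of this log shows. Roughly equivalent standalone commands; the output file names are illustrative, since the test keeps both dumps in shell variables.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target-side snapshot: keyring key0, TLS listener, subsystem, malloc0 namespace, PSK-backed host
$RPC save_config > /tmp/tgtcfg.json

# initiator-side snapshot: keyring key0 plus the TLS-attached nvme0 controller
$RPC -s /var/tmp/bdevperf.sock save_config > /tmp/bperfcfg.json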
00:13:43.596 00:13:43.596 Latency(us) 00:13:43.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.596 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:43.596 Verification LBA range: start 0x0 length 0x2000 00:13:43.596 nvme0n1 : 1.01 4085.35 15.96 0.00 0.00 31075.22 4855.62 26214.40 00:13:43.596 =================================================================================================================== 00:13:43.596 Total : 4085.35 15.96 0.00 0.00 31075.22 4855.62 26214.40 00:13:43.596 0 00:13:43.596 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:13:43.596 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.596 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.856 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.856 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:13:43.856 "subsystems": [ 00:13:43.856 { 00:13:43.856 "subsystem": "keyring", 00:13:43.856 "config": [ 00:13:43.856 { 00:13:43.856 "method": "keyring_file_add_key", 00:13:43.856 "params": { 00:13:43.856 "name": "key0", 00:13:43.856 "path": "/tmp/tmp.jq1FUXD1rm" 00:13:43.856 } 00:13:43.856 } 00:13:43.856 ] 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "subsystem": "iobuf", 00:13:43.856 "config": [ 00:13:43.856 { 00:13:43.856 "method": "iobuf_set_options", 00:13:43.856 "params": { 00:13:43.856 "small_pool_count": 8192, 00:13:43.856 "large_pool_count": 1024, 00:13:43.856 "small_bufsize": 8192, 00:13:43.856 "large_bufsize": 135168 00:13:43.856 } 00:13:43.856 } 00:13:43.856 ] 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "subsystem": "sock", 00:13:43.856 "config": [ 00:13:43.856 { 00:13:43.856 "method": "sock_set_default_impl", 00:13:43.856 "params": { 00:13:43.856 "impl_name": "uring" 00:13:43.856 } 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "method": "sock_impl_set_options", 00:13:43.856 "params": { 00:13:43.856 "impl_name": "ssl", 00:13:43.856 "recv_buf_size": 4096, 00:13:43.856 "send_buf_size": 4096, 00:13:43.856 "enable_recv_pipe": true, 00:13:43.856 "enable_quickack": false, 00:13:43.856 "enable_placement_id": 0, 00:13:43.856 "enable_zerocopy_send_server": true, 00:13:43.856 "enable_zerocopy_send_client": false, 00:13:43.856 "zerocopy_threshold": 0, 00:13:43.856 "tls_version": 0, 00:13:43.856 "enable_ktls": false 00:13:43.856 } 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "method": "sock_impl_set_options", 00:13:43.856 "params": { 00:13:43.856 "impl_name": "posix", 00:13:43.856 "recv_buf_size": 2097152, 00:13:43.856 "send_buf_size": 2097152, 00:13:43.856 "enable_recv_pipe": true, 00:13:43.856 "enable_quickack": false, 00:13:43.856 "enable_placement_id": 0, 00:13:43.856 "enable_zerocopy_send_server": true, 00:13:43.856 "enable_zerocopy_send_client": false, 00:13:43.856 "zerocopy_threshold": 0, 00:13:43.856 "tls_version": 0, 00:13:43.856 "enable_ktls": false 00:13:43.856 } 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "method": "sock_impl_set_options", 00:13:43.856 "params": { 00:13:43.856 "impl_name": "uring", 00:13:43.856 "recv_buf_size": 2097152, 00:13:43.856 "send_buf_size": 2097152, 00:13:43.856 "enable_recv_pipe": true, 00:13:43.856 "enable_quickack": false, 00:13:43.856 "enable_placement_id": 0, 00:13:43.856 "enable_zerocopy_send_server": false, 00:13:43.856 "enable_zerocopy_send_client": false, 00:13:43.856 
"zerocopy_threshold": 0, 00:13:43.856 "tls_version": 0, 00:13:43.856 "enable_ktls": false 00:13:43.856 } 00:13:43.856 } 00:13:43.856 ] 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "subsystem": "vmd", 00:13:43.856 "config": [] 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "subsystem": "accel", 00:13:43.856 "config": [ 00:13:43.856 { 00:13:43.856 "method": "accel_set_options", 00:13:43.856 "params": { 00:13:43.856 "small_cache_size": 128, 00:13:43.856 "large_cache_size": 16, 00:13:43.856 "task_count": 2048, 00:13:43.856 "sequence_count": 2048, 00:13:43.856 "buf_count": 2048 00:13:43.856 } 00:13:43.856 } 00:13:43.856 ] 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "subsystem": "bdev", 00:13:43.856 "config": [ 00:13:43.856 { 00:13:43.856 "method": "bdev_set_options", 00:13:43.856 "params": { 00:13:43.856 "bdev_io_pool_size": 65535, 00:13:43.856 "bdev_io_cache_size": 256, 00:13:43.856 "bdev_auto_examine": true, 00:13:43.856 "iobuf_small_cache_size": 128, 00:13:43.856 "iobuf_large_cache_size": 16 00:13:43.856 } 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "method": "bdev_raid_set_options", 00:13:43.856 "params": { 00:13:43.856 "process_window_size_kb": 1024, 00:13:43.856 "process_max_bandwidth_mb_sec": 0 00:13:43.856 } 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "method": "bdev_iscsi_set_options", 00:13:43.856 "params": { 00:13:43.856 "timeout_sec": 30 00:13:43.856 } 00:13:43.856 }, 00:13:43.856 { 00:13:43.856 "method": "bdev_nvme_set_options", 00:13:43.856 "params": { 00:13:43.856 "action_on_timeout": "none", 00:13:43.856 "timeout_us": 0, 00:13:43.856 "timeout_admin_us": 0, 00:13:43.856 "keep_alive_timeout_ms": 10000, 00:13:43.856 "arbitration_burst": 0, 00:13:43.856 "low_priority_weight": 0, 00:13:43.856 "medium_priority_weight": 0, 00:13:43.856 "high_priority_weight": 0, 00:13:43.856 "nvme_adminq_poll_period_us": 10000, 00:13:43.856 "nvme_ioq_poll_period_us": 0, 00:13:43.856 "io_queue_requests": 0, 00:13:43.856 "delay_cmd_submit": true, 00:13:43.856 "transport_retry_count": 4, 00:13:43.856 "bdev_retry_count": 3, 00:13:43.856 "transport_ack_timeout": 0, 00:13:43.856 "ctrlr_loss_timeout_sec": 0, 00:13:43.857 "reconnect_delay_sec": 0, 00:13:43.857 "fast_io_fail_timeout_sec": 0, 00:13:43.857 "disable_auto_failback": false, 00:13:43.857 "generate_uuids": false, 00:13:43.857 "transport_tos": 0, 00:13:43.857 "nvme_error_stat": false, 00:13:43.857 "rdma_srq_size": 0, 00:13:43.857 "io_path_stat": false, 00:13:43.857 "allow_accel_sequence": false, 00:13:43.857 "rdma_max_cq_size": 0, 00:13:43.857 "rdma_cm_event_timeout_ms": 0, 00:13:43.857 "dhchap_digests": [ 00:13:43.857 "sha256", 00:13:43.857 "sha384", 00:13:43.857 "sha512" 00:13:43.857 ], 00:13:43.857 "dhchap_dhgroups": [ 00:13:43.857 "null", 00:13:43.857 "ffdhe2048", 00:13:43.857 "ffdhe3072", 00:13:43.857 "ffdhe4096", 00:13:43.857 "ffdhe6144", 00:13:43.857 "ffdhe8192" 00:13:43.857 ] 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "bdev_nvme_set_hotplug", 00:13:43.857 "params": { 00:13:43.857 "period_us": 100000, 00:13:43.857 "enable": false 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "bdev_malloc_create", 00:13:43.857 "params": { 00:13:43.857 "name": "malloc0", 00:13:43.857 "num_blocks": 8192, 00:13:43.857 "block_size": 4096, 00:13:43.857 "physical_block_size": 4096, 00:13:43.857 "uuid": "fdae95f6-a28a-4e6a-a57f-fc80ce55a9b3", 00:13:43.857 "optimal_io_boundary": 0, 00:13:43.857 "md_size": 0, 00:13:43.857 "dif_type": 0, 00:13:43.857 "dif_is_head_of_md": false, 00:13:43.857 "dif_pi_format": 0 00:13:43.857 } 
00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "bdev_wait_for_examine" 00:13:43.857 } 00:13:43.857 ] 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "subsystem": "nbd", 00:13:43.857 "config": [] 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "subsystem": "scheduler", 00:13:43.857 "config": [ 00:13:43.857 { 00:13:43.857 "method": "framework_set_scheduler", 00:13:43.857 "params": { 00:13:43.857 "name": "static" 00:13:43.857 } 00:13:43.857 } 00:13:43.857 ] 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "subsystem": "nvmf", 00:13:43.857 "config": [ 00:13:43.857 { 00:13:43.857 "method": "nvmf_set_config", 00:13:43.857 "params": { 00:13:43.857 "discovery_filter": "match_any", 00:13:43.857 "admin_cmd_passthru": { 00:13:43.857 "identify_ctrlr": false 00:13:43.857 } 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "nvmf_set_max_subsystems", 00:13:43.857 "params": { 00:13:43.857 "max_subsystems": 1024 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "nvmf_set_crdt", 00:13:43.857 "params": { 00:13:43.857 "crdt1": 0, 00:13:43.857 "crdt2": 0, 00:13:43.857 "crdt3": 0 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "nvmf_create_transport", 00:13:43.857 "params": { 00:13:43.857 "trtype": "TCP", 00:13:43.857 "max_queue_depth": 128, 00:13:43.857 "max_io_qpairs_per_ctrlr": 127, 00:13:43.857 "in_capsule_data_size": 4096, 00:13:43.857 "max_io_size": 131072, 00:13:43.857 "io_unit_size": 131072, 00:13:43.857 "max_aq_depth": 128, 00:13:43.857 "num_shared_buffers": 511, 00:13:43.857 "buf_cache_size": 4294967295, 00:13:43.857 "dif_insert_or_strip": false, 00:13:43.857 "zcopy": false, 00:13:43.857 "c2h_success": false, 00:13:43.857 "sock_priority": 0, 00:13:43.857 "abort_timeout_sec": 1, 00:13:43.857 "ack_timeout": 0, 00:13:43.857 "data_wr_pool_size": 0 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "nvmf_create_subsystem", 00:13:43.857 "params": { 00:13:43.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.857 "allow_any_host": false, 00:13:43.857 "serial_number": "00000000000000000000", 00:13:43.857 "model_number": "SPDK bdev Controller", 00:13:43.857 "max_namespaces": 32, 00:13:43.857 "min_cntlid": 1, 00:13:43.857 "max_cntlid": 65519, 00:13:43.857 "ana_reporting": false 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "nvmf_subsystem_add_host", 00:13:43.857 "params": { 00:13:43.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.857 "host": "nqn.2016-06.io.spdk:host1", 00:13:43.857 "psk": "key0" 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "nvmf_subsystem_add_ns", 00:13:43.857 "params": { 00:13:43.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.857 "namespace": { 00:13:43.857 "nsid": 1, 00:13:43.857 "bdev_name": "malloc0", 00:13:43.857 "nguid": "FDAE95F6A28A4E6AA57FFC80CE55A9B3", 00:13:43.857 "uuid": "fdae95f6-a28a-4e6a-a57f-fc80ce55a9b3", 00:13:43.857 "no_auto_visible": false 00:13:43.857 } 00:13:43.857 } 00:13:43.857 }, 00:13:43.857 { 00:13:43.857 "method": "nvmf_subsystem_add_listener", 00:13:43.857 "params": { 00:13:43.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.857 "listen_address": { 00:13:43.857 "trtype": "TCP", 00:13:43.857 "adrfam": "IPv4", 00:13:43.857 "traddr": "10.0.0.2", 00:13:43.857 "trsvcid": "4420" 00:13:43.857 }, 00:13:43.857 "secure_channel": false, 00:13:43.857 "sock_impl": "ssl" 00:13:43.857 } 00:13:43.857 } 00:13:43.857 ] 00:13:43.857 } 00:13:43.857 ] 00:13:43.857 }' 00:13:43.857 13:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:44.116 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:13:44.116 "subsystems": [ 00:13:44.116 { 00:13:44.116 "subsystem": "keyring", 00:13:44.116 "config": [ 00:13:44.116 { 00:13:44.116 "method": "keyring_file_add_key", 00:13:44.116 "params": { 00:13:44.116 "name": "key0", 00:13:44.116 "path": "/tmp/tmp.jq1FUXD1rm" 00:13:44.116 } 00:13:44.116 } 00:13:44.116 ] 00:13:44.116 }, 00:13:44.116 { 00:13:44.116 "subsystem": "iobuf", 00:13:44.116 "config": [ 00:13:44.116 { 00:13:44.116 "method": "iobuf_set_options", 00:13:44.116 "params": { 00:13:44.116 "small_pool_count": 8192, 00:13:44.116 "large_pool_count": 1024, 00:13:44.116 "small_bufsize": 8192, 00:13:44.117 "large_bufsize": 135168 00:13:44.117 } 00:13:44.117 } 00:13:44.117 ] 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "subsystem": "sock", 00:13:44.117 "config": [ 00:13:44.117 { 00:13:44.117 "method": "sock_set_default_impl", 00:13:44.117 "params": { 00:13:44.117 "impl_name": "uring" 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "sock_impl_set_options", 00:13:44.117 "params": { 00:13:44.117 "impl_name": "ssl", 00:13:44.117 "recv_buf_size": 4096, 00:13:44.117 "send_buf_size": 4096, 00:13:44.117 "enable_recv_pipe": true, 00:13:44.117 "enable_quickack": false, 00:13:44.117 "enable_placement_id": 0, 00:13:44.117 "enable_zerocopy_send_server": true, 00:13:44.117 "enable_zerocopy_send_client": false, 00:13:44.117 "zerocopy_threshold": 0, 00:13:44.117 "tls_version": 0, 00:13:44.117 "enable_ktls": false 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "sock_impl_set_options", 00:13:44.117 "params": { 00:13:44.117 "impl_name": "posix", 00:13:44.117 "recv_buf_size": 2097152, 00:13:44.117 "send_buf_size": 2097152, 00:13:44.117 "enable_recv_pipe": true, 00:13:44.117 "enable_quickack": false, 00:13:44.117 "enable_placement_id": 0, 00:13:44.117 "enable_zerocopy_send_server": true, 00:13:44.117 "enable_zerocopy_send_client": false, 00:13:44.117 "zerocopy_threshold": 0, 00:13:44.117 "tls_version": 0, 00:13:44.117 "enable_ktls": false 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "sock_impl_set_options", 00:13:44.117 "params": { 00:13:44.117 "impl_name": "uring", 00:13:44.117 "recv_buf_size": 2097152, 00:13:44.117 "send_buf_size": 2097152, 00:13:44.117 "enable_recv_pipe": true, 00:13:44.117 "enable_quickack": false, 00:13:44.117 "enable_placement_id": 0, 00:13:44.117 "enable_zerocopy_send_server": false, 00:13:44.117 "enable_zerocopy_send_client": false, 00:13:44.117 "zerocopy_threshold": 0, 00:13:44.117 "tls_version": 0, 00:13:44.117 "enable_ktls": false 00:13:44.117 } 00:13:44.117 } 00:13:44.117 ] 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "subsystem": "vmd", 00:13:44.117 "config": [] 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "subsystem": "accel", 00:13:44.117 "config": [ 00:13:44.117 { 00:13:44.117 "method": "accel_set_options", 00:13:44.117 "params": { 00:13:44.117 "small_cache_size": 128, 00:13:44.117 "large_cache_size": 16, 00:13:44.117 "task_count": 2048, 00:13:44.117 "sequence_count": 2048, 00:13:44.117 "buf_count": 2048 00:13:44.117 } 00:13:44.117 } 00:13:44.117 ] 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "subsystem": "bdev", 00:13:44.117 "config": [ 00:13:44.117 { 00:13:44.117 "method": "bdev_set_options", 00:13:44.117 "params": { 00:13:44.117 "bdev_io_pool_size": 65535, 00:13:44.117 "bdev_io_cache_size": 256, 00:13:44.117 "bdev_auto_examine": true, 
00:13:44.117 "iobuf_small_cache_size": 128, 00:13:44.117 "iobuf_large_cache_size": 16 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "bdev_raid_set_options", 00:13:44.117 "params": { 00:13:44.117 "process_window_size_kb": 1024, 00:13:44.117 "process_max_bandwidth_mb_sec": 0 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "bdev_iscsi_set_options", 00:13:44.117 "params": { 00:13:44.117 "timeout_sec": 30 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "bdev_nvme_set_options", 00:13:44.117 "params": { 00:13:44.117 "action_on_timeout": "none", 00:13:44.117 "timeout_us": 0, 00:13:44.117 "timeout_admin_us": 0, 00:13:44.117 "keep_alive_timeout_ms": 10000, 00:13:44.117 "arbitration_burst": 0, 00:13:44.117 "low_priority_weight": 0, 00:13:44.117 "medium_priority_weight": 0, 00:13:44.117 "high_priority_weight": 0, 00:13:44.117 "nvme_adminq_poll_period_us": 10000, 00:13:44.117 "nvme_ioq_poll_period_us": 0, 00:13:44.117 "io_queue_requests": 512, 00:13:44.117 "delay_cmd_submit": true, 00:13:44.117 "transport_retry_count": 4, 00:13:44.117 "bdev_retry_count": 3, 00:13:44.117 "transport_ack_timeout": 0, 00:13:44.117 "ctrlr_loss_timeout_sec": 0, 00:13:44.117 "reconnect_delay_sec": 0, 00:13:44.117 "fast_io_fail_timeout_sec": 0, 00:13:44.117 "disable_auto_failback": false, 00:13:44.117 "generate_uuids": false, 00:13:44.117 "transport_tos": 0, 00:13:44.117 "nvme_error_stat": false, 00:13:44.117 "rdma_srq_size": 0, 00:13:44.117 "io_path_stat": false, 00:13:44.117 "allow_accel_sequence": false, 00:13:44.117 "rdma_max_cq_size": 0, 00:13:44.117 "rdma_cm_event_timeout_ms": 0, 00:13:44.117 "dhchap_digests": [ 00:13:44.117 "sha256", 00:13:44.117 "sha384", 00:13:44.117 "sha512" 00:13:44.117 ], 00:13:44.117 "dhchap_dhgroups": [ 00:13:44.117 "null", 00:13:44.117 "ffdhe2048", 00:13:44.117 "ffdhe3072", 00:13:44.117 "ffdhe4096", 00:13:44.117 "ffdhe6144", 00:13:44.117 "ffdhe8192" 00:13:44.117 ] 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "bdev_nvme_attach_controller", 00:13:44.117 "params": { 00:13:44.117 "name": "nvme0", 00:13:44.117 "trtype": "TCP", 00:13:44.117 "adrfam": "IPv4", 00:13:44.117 "traddr": "10.0.0.2", 00:13:44.117 "trsvcid": "4420", 00:13:44.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.117 "prchk_reftag": false, 00:13:44.117 "prchk_guard": false, 00:13:44.117 "ctrlr_loss_timeout_sec": 0, 00:13:44.117 "reconnect_delay_sec": 0, 00:13:44.117 "fast_io_fail_timeout_sec": 0, 00:13:44.117 "psk": "key0", 00:13:44.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:44.117 "hdgst": false, 00:13:44.117 "ddgst": false 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "bdev_nvme_set_hotplug", 00:13:44.117 "params": { 00:13:44.117 "period_us": 100000, 00:13:44.117 "enable": false 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "bdev_enable_histogram", 00:13:44.117 "params": { 00:13:44.117 "name": "nvme0n1", 00:13:44.117 "enable": true 00:13:44.117 } 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "method": "bdev_wait_for_examine" 00:13:44.117 } 00:13:44.117 ] 00:13:44.117 }, 00:13:44.117 { 00:13:44.117 "subsystem": "nbd", 00:13:44.117 "config": [] 00:13:44.117 } 00:13:44.117 ] 00:13:44.117 }' 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 73030 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73030 ']' 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 73030 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73030 00:13:44.117 killing process with pid 73030 00:13:44.117 Received shutdown signal, test time was about 1.000000 seconds 00:13:44.117 00:13:44.117 Latency(us) 00:13:44.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.117 =================================================================================================================== 00:13:44.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73030' 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73030 00:13:44.117 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73030 00:13:44.376 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 72998 00:13:44.376 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72998 ']' 00:13:44.376 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72998 00:13:44.376 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:44.377 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.377 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72998 00:13:44.377 killing process with pid 72998 00:13:44.377 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.377 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.377 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72998' 00:13:44.377 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72998 00:13:44.377 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72998 00:13:44.636 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:13:44.636 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.636 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.636 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:13:44.636 "subsystems": [ 00:13:44.636 { 00:13:44.636 "subsystem": "keyring", 00:13:44.636 "config": [ 00:13:44.636 { 00:13:44.636 "method": "keyring_file_add_key", 00:13:44.636 "params": { 00:13:44.636 "name": "key0", 00:13:44.636 "path": "/tmp/tmp.jq1FUXD1rm" 00:13:44.636 } 00:13:44.636 } 00:13:44.636 ] 00:13:44.636 }, 00:13:44.636 { 00:13:44.636 "subsystem": "iobuf", 00:13:44.636 "config": [ 00:13:44.636 { 00:13:44.636 "method": 
"iobuf_set_options", 00:13:44.636 "params": { 00:13:44.636 "small_pool_count": 8192, 00:13:44.636 "large_pool_count": 1024, 00:13:44.636 "small_bufsize": 8192, 00:13:44.636 "large_bufsize": 135168 00:13:44.636 } 00:13:44.636 } 00:13:44.636 ] 00:13:44.636 }, 00:13:44.636 { 00:13:44.636 "subsystem": "sock", 00:13:44.636 "config": [ 00:13:44.636 { 00:13:44.637 "method": "sock_set_default_impl", 00:13:44.637 "params": { 00:13:44.637 "impl_name": "uring" 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "sock_impl_set_options", 00:13:44.637 "params": { 00:13:44.637 "impl_name": "ssl", 00:13:44.637 "recv_buf_size": 4096, 00:13:44.637 "send_buf_size": 4096, 00:13:44.637 "enable_recv_pipe": true, 00:13:44.637 "enable_quickack": false, 00:13:44.637 "enable_placement_id": 0, 00:13:44.637 "enable_zerocopy_send_server": true, 00:13:44.637 "enable_zerocopy_send_client": false, 00:13:44.637 "zerocopy_threshold": 0, 00:13:44.637 "tls_version": 0, 00:13:44.637 "enable_ktls": false 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "sock_impl_set_options", 00:13:44.637 "params": { 00:13:44.637 "impl_name": "posix", 00:13:44.637 "recv_buf_size": 2097152, 00:13:44.637 "send_buf_size": 2097152, 00:13:44.637 "enable_recv_pipe": true, 00:13:44.637 "enable_quickack": false, 00:13:44.637 "enable_placement_id": 0, 00:13:44.637 "enable_zerocopy_send_server": true, 00:13:44.637 "enable_zerocopy_send_client": false, 00:13:44.637 "zerocopy_threshold": 0, 00:13:44.637 "tls_version": 0, 00:13:44.637 "enable_ktls": false 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "sock_impl_set_options", 00:13:44.637 "params": { 00:13:44.637 "impl_name": "uring", 00:13:44.637 "recv_buf_size": 2097152, 00:13:44.637 "send_buf_size": 2097152, 00:13:44.637 "enable_recv_pipe": true, 00:13:44.637 "enable_quickack": false, 00:13:44.637 "enable_placement_id": 0, 00:13:44.637 "enable_zerocopy_send_server": false, 00:13:44.637 "enable_zerocopy_send_client": false, 00:13:44.637 "zerocopy_threshold": 0, 00:13:44.637 "tls_version": 0, 00:13:44.637 "enable_ktls": false 00:13:44.637 } 00:13:44.637 } 00:13:44.637 ] 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "subsystem": "vmd", 00:13:44.637 "config": [] 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "subsystem": "accel", 00:13:44.637 "config": [ 00:13:44.637 { 00:13:44.637 "method": "accel_set_options", 00:13:44.637 "params": { 00:13:44.637 "small_cache_size": 128, 00:13:44.637 "large_cache_size": 16, 00:13:44.637 "task_count": 2048, 00:13:44.637 "sequence_count": 2048, 00:13:44.637 "buf_count": 2048 00:13:44.637 } 00:13:44.637 } 00:13:44.637 ] 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "subsystem": "bdev", 00:13:44.637 "config": [ 00:13:44.637 { 00:13:44.637 "method": "bdev_set_options", 00:13:44.637 "params": { 00:13:44.637 "bdev_io_pool_size": 65535, 00:13:44.637 "bdev_io_cache_size": 256, 00:13:44.637 "bdev_auto_examine": true, 00:13:44.637 "iobuf_small_cache_size": 128, 00:13:44.637 "iobuf_large_cache_size": 16 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "bdev_raid_set_options", 00:13:44.637 "params": { 00:13:44.637 "process_window_size_kb": 1024, 00:13:44.637 "process_max_bandwidth_mb_sec": 0 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "bdev_iscsi_set_options", 00:13:44.637 "params": { 00:13:44.637 "timeout_sec": 30 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "bdev_nvme_set_options", 00:13:44.637 "params": { 00:13:44.637 "action_on_timeout": "none", 00:13:44.637 
"timeout_us": 0, 00:13:44.637 "timeout_admin_us": 0, 00:13:44.637 "keep_alive_timeout_ms": 10000, 00:13:44.637 "arbitration_burst": 0, 00:13:44.637 "low_priority_weight": 0, 00:13:44.637 "medium_priority_weight": 0, 00:13:44.637 "high_priority_weight": 0, 00:13:44.637 "nvme_adminq_poll_period_us": 10000, 00:13:44.637 "nvme_ioq_poll_period_us": 0, 00:13:44.637 "io_queue_requests": 0, 00:13:44.637 "delay_cmd_submit": true, 00:13:44.637 "transport_retry_count": 4, 00:13:44.637 "bdev_retry_count": 3, 00:13:44.637 "transport_ack_timeout": 0, 00:13:44.637 "ctrlr_loss_timeout_sec": 0, 00:13:44.637 "reconnect_delay_sec": 0, 00:13:44.637 "fast_io_fail_timeout_sec": 0, 00:13:44.637 "disable_auto_failback": false, 00:13:44.637 "generate_uuids": false, 00:13:44.637 "transport_tos": 0, 00:13:44.637 "nvme_error_stat": false, 00:13:44.637 "rdma_srq_size": 0, 00:13:44.637 "io_path_stat": false, 00:13:44.637 "allow_accel_sequence": false, 00:13:44.637 "rdma_max_cq_size": 0, 00:13:44.637 "rdma_cm_event_timeout_ms": 0, 00:13:44.637 "dhchap_digests": [ 00:13:44.637 "sha256", 00:13:44.637 "sha384", 00:13:44.637 "sha512" 00:13:44.637 ], 00:13:44.637 "dhchap_dhgroups": [ 00:13:44.637 "null", 00:13:44.637 "ffdhe2048", 00:13:44.637 "ffdhe3072", 00:13:44.637 "ffdhe4096", 00:13:44.637 "ffdhe6144", 00:13:44.637 "ffdhe8192" 00:13:44.637 ] 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "bdev_nvme_set_hotplug", 00:13:44.637 "params": { 00:13:44.637 "period_us": 100000, 00:13:44.637 "enable": false 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "bdev_malloc_create", 00:13:44.637 "params": { 00:13:44.637 "name": "malloc0", 00:13:44.637 "num_blocks": 8192, 00:13:44.637 "block_size": 4096, 00:13:44.637 "physical_block_size": 4096, 00:13:44.637 "uuid": "fdae95f6-a28a-4e6a-a57f-fc80ce55a9b3", 00:13:44.637 "optimal_io_boundary": 0, 00:13:44.637 "md_size": 0, 00:13:44.637 "dif_type": 0, 00:13:44.637 "dif_is_head_of_md": false, 00:13:44.637 "dif_pi_format": 0 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "bdev_wait_for_examine" 00:13:44.637 } 00:13:44.637 ] 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "subsystem": "nbd", 00:13:44.637 "config": [] 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "subsystem": "scheduler", 00:13:44.637 "config": [ 00:13:44.637 { 00:13:44.637 "method": "framework_set_scheduler", 00:13:44.637 "params": { 00:13:44.637 "name": "static" 00:13:44.637 } 00:13:44.637 } 00:13:44.637 ] 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "subsystem": "nvmf", 00:13:44.637 "config": [ 00:13:44.637 { 00:13:44.637 "method": "nvmf_set_config", 00:13:44.637 "params": { 00:13:44.637 "discovery_filter": "match_any", 00:13:44.637 "admin_cmd_passthru": { 00:13:44.637 "identify_ctrlr": false 00:13:44.637 } 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "nvmf_set_max_subsystems", 00:13:44.637 "params": { 00:13:44.637 "max_subsystems": 1024 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "nvmf_set_crdt", 00:13:44.637 "params": { 00:13:44.637 "crdt1": 0, 00:13:44.637 "crdt2": 0, 00:13:44.637 "crdt3": 0 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "nvmf_create_transport", 00:13:44.637 "params": { 00:13:44.637 "trtype": "TCP", 00:13:44.637 "max_queue_depth": 128, 00:13:44.637 "max_io_qpairs_per_ctrlr": 127, 00:13:44.637 "in_capsule_data_size": 4096, 00:13:44.637 "max_io_size": 131072, 00:13:44.637 "io_unit_size": 131072, 00:13:44.637 "max_aq_depth": 128, 00:13:44.637 "num_shared_buffers": 511, 00:13:44.637 
"buf_cache_size": 4294967295, 00:13:44.637 "dif_insert_or_strip": false, 00:13:44.637 "zcopy": false, 00:13:44.637 "c2h_success": false, 00:13:44.637 "sock_priority": 0, 00:13:44.637 "abort_timeout_sec": 1, 00:13:44.637 "ack_timeout": 0, 00:13:44.637 "data_wr_pool_size": 0 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "nvmf_create_subsystem", 00:13:44.637 "params": { 00:13:44.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.637 "allow_any_host": false, 00:13:44.637 "serial_number": "00000000000000000000", 00:13:44.637 "model_number": "SPDK bdev Controller", 00:13:44.637 "max_namespaces": 32, 00:13:44.637 "min_cntlid": 1, 00:13:44.637 "max_cntlid": 65519, 00:13:44.637 "ana_reporting": false 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "nvmf_subsystem_add_host", 00:13:44.637 "params": { 00:13:44.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.637 "host": "nqn.2016-06.io.spdk:host1", 00:13:44.637 "psk": "key0" 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "nvmf_subsystem_add_ns", 00:13:44.637 "params": { 00:13:44.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.637 "namespace": { 00:13:44.637 "nsid": 1, 00:13:44.637 "bdev_name": "malloc0", 00:13:44.637 "nguid": "FDAE95F6A28A4E6AA57FFC80CE55A9B3", 00:13:44.637 "uuid": "fdae95f6-a28a-4e6a-a57f-fc80ce55a9b3", 00:13:44.637 "no_auto_visible": false 00:13:44.637 } 00:13:44.637 } 00:13:44.637 }, 00:13:44.637 { 00:13:44.637 "method": "nvmf_subsystem_add_listener", 00:13:44.637 "params": { 00:13:44.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.637 "listen_address": { 00:13:44.638 "trtype": "TCP", 00:13:44.638 "adrfam": "IPv4", 00:13:44.638 "traddr": "10.0.0.2", 00:13:44.638 "trsvcid": "4420" 00:13:44.638 }, 00:13:44.638 "secure_channel": false, 00:13:44.638 "sock_impl": "ssl" 00:13:44.638 } 00:13:44.638 } 00:13:44.638 ] 00:13:44.638 } 00:13:44.638 ] 00:13:44.638 }' 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73085 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73085 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73085 ']' 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.638 13:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.638 [2024-07-25 13:09:36.764755] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:44.638 [2024-07-25 13:09:36.764860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.897 [2024-07-25 13:09:36.899743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.897 [2024-07-25 13:09:36.985201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.897 [2024-07-25 13:09:36.985290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.897 [2024-07-25 13:09:36.985317] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.897 [2024-07-25 13:09:36.985325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.897 [2024-07-25 13:09:36.985332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.897 [2024-07-25 13:09:36.985397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.155 [2024-07-25 13:09:37.148718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:45.155 [2024-07-25 13:09:37.219843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.155 [2024-07-25 13:09:37.251821] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:45.155 [2024-07-25 13:09:37.260014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73117 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:45.723 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:13:45.723 "subsystems": [ 00:13:45.723 { 00:13:45.723 "subsystem": "keyring", 00:13:45.723 "config": [ 00:13:45.723 { 00:13:45.723 "method": "keyring_file_add_key", 00:13:45.723 "params": { 00:13:45.723 "name": "key0", 00:13:45.723 "path": "/tmp/tmp.jq1FUXD1rm" 00:13:45.723 } 00:13:45.723 } 00:13:45.723 ] 00:13:45.723 }, 00:13:45.723 { 00:13:45.723 "subsystem": "iobuf", 00:13:45.723 "config": [ 00:13:45.723 { 00:13:45.723 "method": "iobuf_set_options", 00:13:45.723 "params": { 00:13:45.723 "small_pool_count": 8192, 00:13:45.723 "large_pool_count": 1024, 00:13:45.723 "small_bufsize": 8192, 00:13:45.723 "large_bufsize": 135168 00:13:45.723 } 00:13:45.723 } 00:13:45.723 ] 00:13:45.723 }, 00:13:45.723 { 00:13:45.723 "subsystem": "sock", 00:13:45.723 "config": [ 
00:13:45.723 { 00:13:45.723 "method": "sock_set_default_impl", 00:13:45.723 "params": { 00:13:45.723 "impl_name": "uring" 00:13:45.723 } 00:13:45.723 }, 00:13:45.723 { 00:13:45.723 "method": "sock_impl_set_options", 00:13:45.723 "params": { 00:13:45.723 "impl_name": "ssl", 00:13:45.723 "recv_buf_size": 4096, 00:13:45.723 "send_buf_size": 4096, 00:13:45.723 "enable_recv_pipe": true, 00:13:45.723 "enable_quickack": false, 00:13:45.723 "enable_placement_id": 0, 00:13:45.723 "enable_zerocopy_send_server": true, 00:13:45.723 "enable_zerocopy_send_client": false, 00:13:45.723 "zerocopy_threshold": 0, 00:13:45.723 "tls_version": 0, 00:13:45.723 "enable_ktls": false 00:13:45.723 } 00:13:45.723 }, 00:13:45.724 { 00:13:45.724 "method": "sock_impl_set_options", 00:13:45.724 "params": { 00:13:45.724 "impl_name": "posix", 00:13:45.724 "recv_buf_size": 2097152, 00:13:45.724 "send_buf_size": 2097152, 00:13:45.724 "enable_recv_pipe": true, 00:13:45.724 "enable_quickack": false, 00:13:45.724 "enable_placement_id": 0, 00:13:45.724 "enable_zerocopy_send_server": true, 00:13:45.724 "enable_zerocopy_send_client": false, 00:13:45.724 "zerocopy_threshold": 0, 00:13:45.724 "tls_version": 0, 00:13:45.724 "enable_ktls": false 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "sock_impl_set_options", 00:13:45.724 "params": { 00:13:45.724 "impl_name": "uring", 00:13:45.724 "recv_buf_size": 2097152, 00:13:45.724 "send_buf_size": 2097152, 00:13:45.724 "enable_recv_pipe": true, 00:13:45.724 "enable_quickack": false, 00:13:45.724 "enable_placement_id": 0, 00:13:45.724 "enable_zerocopy_send_server": false, 00:13:45.724 "enable_zerocopy_send_client": false, 00:13:45.724 "zerocopy_threshold": 0, 00:13:45.724 "tls_version": 0, 00:13:45.724 "enable_ktls": false 00:13:45.724 } 00:13:45.724 } 00:13:45.724 ] 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "subsystem": "vmd", 00:13:45.724 "config": [] 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "subsystem": "accel", 00:13:45.724 "config": [ 00:13:45.724 { 00:13:45.724 "method": "accel_set_options", 00:13:45.724 "params": { 00:13:45.724 "small_cache_size": 128, 00:13:45.724 "large_cache_size": 16, 00:13:45.724 "task_count": 2048, 00:13:45.724 "sequence_count": 2048, 00:13:45.724 "buf_count": 2048 00:13:45.724 } 00:13:45.724 } 00:13:45.724 ] 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "subsystem": "bdev", 00:13:45.724 "config": [ 00:13:45.724 { 00:13:45.724 "method": "bdev_set_options", 00:13:45.724 "params": { 00:13:45.724 "bdev_io_pool_size": 65535, 00:13:45.724 "bdev_io_cache_size": 256, 00:13:45.724 "bdev_auto_examine": true, 00:13:45.724 "iobuf_small_cache_size": 128, 00:13:45.724 "iobuf_large_cache_size": 16 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "bdev_raid_set_options", 00:13:45.724 "params": { 00:13:45.724 "process_window_size_kb": 1024, 00:13:45.724 "process_max_bandwidth_mb_sec": 0 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "bdev_iscsi_set_options", 00:13:45.724 "params": { 00:13:45.724 "timeout_sec": 30 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "bdev_nvme_set_options", 00:13:45.724 "params": { 00:13:45.724 "action_on_timeout": "none", 00:13:45.724 "timeout_us": 0, 00:13:45.724 "timeout_admin_us": 0, 00:13:45.724 "keep_alive_timeout_ms": 10000, 00:13:45.724 "arbitration_burst": 0, 00:13:45.724 "low_priority_weight": 0, 00:13:45.724 "medium_priority_weight": 0, 00:13:45.724 "high_priority_weight": 0, 00:13:45.724 "nvme_adminq_poll_period_us": 10000, 00:13:45.724 
"nvme_ioq_poll_period_us": 0, 00:13:45.724 "io_queue_requests": 512, 00:13:45.724 "delay_cmd_submit": true, 00:13:45.724 "transport_retry_count": 4, 00:13:45.724 "bdev_retry_count": 3, 00:13:45.724 "transport_ack_timeout": 0, 00:13:45.724 "ctrlr_loss_timeout_sec": 0, 00:13:45.724 "reconnect_delay_sec": 0, 00:13:45.724 "fast_io_fail_timeout_sec": 0, 00:13:45.724 "disable_auto_failback": false, 00:13:45.724 "generate_uuids": false, 00:13:45.724 "transport_tos": 0, 00:13:45.724 "nvme_error_stat": false, 00:13:45.724 "rdma_srq_size": 0, 00:13:45.724 "io_path_stat": false, 00:13:45.724 "allow_accel_sequence": false, 00:13:45.724 "rdma_max_cq_size": 0, 00:13:45.724 "rdma_cm_event_timeout_ms": 0, 00:13:45.724 "dhchap_digests": [ 00:13:45.724 "sha256", 00:13:45.724 "sha384", 00:13:45.724 "sha512" 00:13:45.724 ], 00:13:45.724 "dhchap_dhgroups": [ 00:13:45.724 "null", 00:13:45.724 "ffdhe2048", 00:13:45.724 "ffdhe3072", 00:13:45.724 "ffdhe4096", 00:13:45.724 "ffdhe6144", 00:13:45.724 "ffdhe8192" 00:13:45.724 ] 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "bdev_nvme_attach_controller", 00:13:45.724 "params": { 00:13:45.724 "name": "nvme0", 00:13:45.724 "trtype": "TCP", 00:13:45.724 "adrfam": "IPv4", 00:13:45.724 "traddr": "10.0.0.2", 00:13:45.724 "trsvcid": "4420", 00:13:45.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.724 "prchk_reftag": false, 00:13:45.724 "prchk_guard": false, 00:13:45.724 "ctrlr_loss_timeout_sec": 0, 00:13:45.724 "reconnect_delay_sec": 0, 00:13:45.724 "fast_io_fail_timeout_sec": 0, 00:13:45.724 "psk": "key0", 00:13:45.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.724 "hdgst": false, 00:13:45.724 "ddgst": false 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "bdev_nvme_set_hotplug", 00:13:45.724 "params": { 00:13:45.724 "period_us": 100000, 00:13:45.724 "enable": false 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "bdev_enable_histogram", 00:13:45.724 "params": { 00:13:45.724 "name": "nvme0n1", 00:13:45.724 "enable": true 00:13:45.724 } 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "method": "bdev_wait_for_examine" 00:13:45.724 } 00:13:45.724 ] 00:13:45.724 }, 00:13:45.724 { 00:13:45.724 "subsystem": "nbd", 00:13:45.724 "config": [] 00:13:45.724 } 00:13:45.724 ] 00:13:45.724 }' 00:13:45.724 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73117 /var/tmp/bdevperf.sock 00:13:45.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:45.724 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73117 ']' 00:13:45.724 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.724 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.724 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.724 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.724 13:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.724 [2024-07-25 13:09:37.806844] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:45.724 [2024-07-25 13:09:37.806923] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:13:45.984 [2024-07-25 13:09:37.947653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.984 [2024-07-25 13:09:38.049680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.242 [2024-07-25 13:09:38.184538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:46.242 [2024-07-25 13:09:38.227728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:46.809 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.809 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:46.809 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:46.809 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:13:46.809 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.809 13:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:46.809 Running I/O for 1 seconds... 00:13:48.186 00:13:48.186 Latency(us) 00:13:48.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.186 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:48.186 Verification LBA range: start 0x0 length 0x2000 00:13:48.186 nvme0n1 : 1.01 4434.79 17.32 0.00 0.00 28661.68 4259.84 26571.87 00:13:48.186 =================================================================================================================== 00:13:48.186 Total : 4434.79 17.32 0.00 0.00 28661.68 4259.84 26571.87 00:13:48.186 0 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:48.186 nvmf_trace.0 00:13:48.186 13:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73117 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73117 ']' 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73117 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73117 00:13:48.186 killing process with pid 73117 00:13:48.186 Received shutdown signal, test time was about 1.000000 seconds 00:13:48.186 00:13:48.186 Latency(us) 00:13:48.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.186 =================================================================================================================== 00:13:48.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73117' 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73117 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73117 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.186 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.445 rmmod nvme_tcp 00:13:48.445 rmmod nvme_fabrics 00:13:48.445 rmmod nvme_keyring 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73085 ']' 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73085 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73085 ']' 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73085 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.445 13:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73085 00:13:48.445 killing process with pid 73085 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73085' 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73085 00:13:48.445 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73085 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.6Z2diUBkvM /tmp/tmp.6NCvKllKbh /tmp/tmp.jq1FUXD1rm 00:13:48.705 ************************************ 00:13:48.705 END TEST nvmf_tls 00:13:48.705 ************************************ 00:13:48.705 00:13:48.705 real 1m24.064s 00:13:48.705 user 2m10.929s 00:13:48.705 sys 0m28.302s 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.705 ************************************ 00:13:48.705 START TEST nvmf_fips 00:13:48.705 ************************************ 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:48.705 * Looking for test storage... 
00:13:48.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:13:48.705 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 
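At this point fips.sh is gating on the installed OpenSSL version: it runs openssl version, keeps the second field (3.0.9 on this host), and the cmp_versions trace that follows compares it component by component against the 3.0.0 minimum. A condensed stand-in for that gate, using sort -V instead of the script's own comparator, is sketched below for illustration only:

target=3.0.0
current=$(openssl version | awk '{print $2}')     # "3.0.9" in this run
# sort -V orders version strings; if the minimum sorts first (or equal), the check passes.
if [[ $(printf '%s\n' "$target" "$current" | sort -V | head -n1) == "$target" ]]; then
    echo "OpenSSL $current satisfies the $target minimum, FIPS checks can proceed"
else
    echo "OpenSSL $current is older than $target" >&2
    exit 1
fi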
00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:13:48.966 13:09:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:13:48.966 Error setting digest 00:13:48.966 0052288F937F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:13:48.966 0052288F937F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.966 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.967 Cannot find device "nvmf_tgt_br" 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.967 Cannot find device "nvmf_tgt_br2" 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.967 Cannot find device "nvmf_tgt_br" 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.967 Cannot find device "nvmf_tgt_br2" 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:13:48.967 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.226 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:49.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:13:49.227 00:13:49.227 --- 10.0.0.2 ping statistics --- 00:13:49.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.227 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:49.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:13:49.227 00:13:49.227 --- 10.0.0.3 ping statistics --- 00:13:49.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.227 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:49.227 00:13:49.227 --- 10.0.0.1 ping statistics --- 00:13:49.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.227 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73381 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73381 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73381 ']' 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.227 13:09:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:49.486 [2024-07-25 13:09:41.498423] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
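The nvmftestinit/nvmf_veth_init sequence traced above is what gives this test its network: a dedicated namespace for the target, veth pairs bridged back to the host, 10.0.0.1 on the initiator side with 10.0.0.2 and 10.0.0.3 inside the namespace, an iptables accept rule for port 4420, and ping probes to confirm reachability before nvmf_tgt is launched inside the namespace. A trimmed recreation of the same topology is sketched below (root required; the second target interface and the bridge FORWARD rule are omitted for brevity, and the interface and namespace names simply follow the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # host side should now reach the target namespace over the bridge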
00:13:49.486 [2024-07-25 13:09:41.498675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.486 [2024-07-25 13:09:41.639756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.745 [2024-07-25 13:09:41.737846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.745 [2024-07-25 13:09:41.737911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.745 [2024-07-25 13:09:41.737927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.745 [2024-07-25 13:09:41.737939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.745 [2024-07-25 13:09:41.737950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.745 [2024-07-25 13:09:41.737995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.745 [2024-07-25 13:09:41.796278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:50.312 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:50.571 [2024-07-25 13:09:42.652156] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.571 [2024-07-25 13:09:42.668114] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:50.571 [2024-07-25 13:09:42.668297] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.571 [2024-07-25 13:09:42.698516] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:50.571 malloc0 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73415 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73415 /var/tmp/bdevperf.sock 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73415 ']' 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.571 13:09:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:50.830 [2024-07-25 13:09:42.811899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:50.830 [2024-07-25 13:09:42.811990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73415 ] 00:13:50.830 [2024-07-25 13:09:42.953237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.089 [2024-07-25 13:09:43.060151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.089 [2024-07-25 13:09:43.117453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.668 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.668 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:13:51.668 13:09:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:51.927 [2024-07-25 13:09:43.936384] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.927 [2024-07-25 13:09:43.936499] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:51.927 TLSTESTn1 00:13:51.927 13:09:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:52.186 Running I/O for 10 seconds... 
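For the TLS-over-PSK path exercised here, fips.sh writes the interchange-format key (NVMeTLSkey-1:01:...) to key.txt, restricts its permissions, registers the same file with the target for the test host NQN (the "PSK path" and spdk_nvme_ctrlr_opts.psk deprecation warnings in this trace refer to exactly that mechanism), and then bdevperf attaches with the identical --psk path, producing the TLSTESTn1 bdev whose 10-second verify results follow. A hedged sketch of that key handling, assuming rpc.py's nvmf_subsystem_add_host accepts the PSK file path via --psk in this tree:

key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Write the interchange-format PSK exactly as the test does and keep the file private.
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"
# Target side (assumption: --psk here takes the key file path, the deprecated "PSK path" form warned about above).
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
# Host side: the same attach command traced verbatim above, issued against bdevperf's RPC socket.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"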
00:14:02.169 00:14:02.169 Latency(us) 00:14:02.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.169 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:02.169 Verification LBA range: start 0x0 length 0x2000 00:14:02.169 TLSTESTn1 : 10.02 3745.31 14.63 0.00 0.00 34105.51 3336.38 24784.52 00:14:02.169 =================================================================================================================== 00:14:02.169 Total : 3745.31 14.63 0.00 0.00 34105.51 3336.38 24784.52 00:14:02.169 0 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:02.169 nvmf_trace.0 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73415 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73415 ']' 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73415 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73415 00:14:02.169 killing process with pid 73415 00:14:02.169 Received shutdown signal, test time was about 10.000000 seconds 00:14:02.169 00:14:02.169 Latency(us) 00:14:02.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.169 =================================================================================================================== 00:14:02.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73415' 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73415 00:14:02.169 [2024-07-25 13:09:54.296123] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:02.169 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73415 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.427 rmmod nvme_tcp 00:14:02.427 rmmod nvme_fabrics 00:14:02.427 rmmod nvme_keyring 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73381 ']' 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73381 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73381 ']' 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73381 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.427 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73381 00:14:02.686 killing process with pid 73381 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73381' 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73381 00:14:02.686 [2024-07-25 13:09:54.622858] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73381 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:02.686 ************************************ 00:14:02.686 END TEST nvmf_fips 00:14:02.686 ************************************ 00:14:02.686 00:14:02.686 real 0m14.093s 00:14:02.686 user 0m19.284s 00:14:02.686 sys 0m5.648s 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.686 13:09:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:02.944 13:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:14:02.944 13:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:14:02.944 13:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:14:02.944 ************************************ 00:14:02.945 END TEST nvmf_target_extra 00:14:02.945 ************************************ 00:14:02.945 00:14:02.945 real 4m22.298s 00:14:02.945 user 9m4.814s 00:14:02.945 sys 1m0.952s 00:14:02.945 13:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.945 13:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.945 13:09:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:02.945 13:09:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:02.945 13:09:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.945 13:09:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.945 ************************************ 00:14:02.945 START TEST nvmf_host 00:14:02.945 ************************************ 00:14:02.945 13:09:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:02.945 * Looking for test storage... 
00:14:02.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:02.945 ************************************ 00:14:02.945 START TEST nvmf_identify 00:14:02.945 ************************************ 00:14:02.945 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:03.203 * Looking for test storage... 
00:14:03.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:03.203 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:03.204 Cannot find device "nvmf_tgt_br" 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:03.204 Cannot find device "nvmf_tgt_br2" 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:03.204 Cannot find device "nvmf_tgt_br" 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:03.204 Cannot find device "nvmf_tgt_br2" 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:03.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:03.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:03.204 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:03.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:03.461 00:14:03.461 --- 10.0.0.2 ping statistics --- 00:14:03.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.461 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:03.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:03.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:14:03.461 00:14:03.461 --- 10.0.0.3 ping statistics --- 00:14:03.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.461 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:03.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:03.461 00:14:03.461 --- 10.0.0.1 ping statistics --- 00:14:03.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.461 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:03.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
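For reference, the nvmf_veth_init sequence traced above reduces to roughly the commands below (interface names and 10.0.0.x addresses are the ones used in this run; the second target interface, nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3, is created the same way and omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk                         # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                       # bridge ties the veth peers together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability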
00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73789 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73789 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 73789 ']' 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.461 13:09:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:03.461 [2024-07-25 13:09:55.626967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:03.461 [2024-07-25 13:09:55.627634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.719 [2024-07-25 13:09:55.771519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.719 [2024-07-25 13:09:55.873058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.719 [2024-07-25 13:09:55.873376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.719 [2024-07-25 13:09:55.873585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.719 [2024-07-25 13:09:55.873807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.719 [2024-07-25 13:09:55.874001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.719 [2024-07-25 13:09:55.874155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.719 [2024-07-25 13:09:55.874271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.719 [2024-07-25 13:09:55.874356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.719 [2024-07-25 13:09:55.874362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.977 [2024-07-25 13:09:55.931874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.543 [2024-07-25 13:09:56.639910] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.543 Malloc0 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.543 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.802 [2024-07-25 13:09:56.748948] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.802 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.802 [ 00:14:04.802 { 00:14:04.802 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:04.802 "subtype": "Discovery", 00:14:04.802 "listen_addresses": [ 00:14:04.802 { 00:14:04.802 "trtype": "TCP", 00:14:04.802 "adrfam": "IPv4", 00:14:04.802 "traddr": "10.0.0.2", 00:14:04.802 "trsvcid": "4420" 00:14:04.802 } 00:14:04.802 ], 00:14:04.802 "allow_any_host": true, 00:14:04.802 "hosts": [] 00:14:04.802 }, 00:14:04.802 { 00:14:04.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.802 "subtype": "NVMe", 00:14:04.802 "listen_addresses": [ 00:14:04.802 { 00:14:04.802 "trtype": "TCP", 00:14:04.802 "adrfam": "IPv4", 00:14:04.802 "traddr": "10.0.0.2", 00:14:04.803 "trsvcid": "4420" 00:14:04.803 } 00:14:04.803 ], 00:14:04.803 "allow_any_host": true, 00:14:04.803 "hosts": [], 00:14:04.803 "serial_number": "SPDK00000000000001", 00:14:04.803 "model_number": "SPDK bdev Controller", 00:14:04.803 "max_namespaces": 32, 00:14:04.803 "min_cntlid": 1, 00:14:04.803 "max_cntlid": 65519, 00:14:04.803 "namespaces": [ 00:14:04.803 { 00:14:04.803 "nsid": 1, 00:14:04.803 "bdev_name": "Malloc0", 00:14:04.803 "name": "Malloc0", 00:14:04.803 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:04.803 "eui64": "ABCDEF0123456789", 00:14:04.803 "uuid": "76482b55-8252-43e6-9b40-611464732ca7" 00:14:04.803 } 00:14:04.803 ] 00:14:04.803 } 00:14:04.803 ] 00:14:04.803 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:04.803 [2024-07-25 13:09:56.805647] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
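Before the identify pass whose debug output follows, the target side was assembled by the rpc_cmd calls traced above. Condensed into the underlying commands (rpc.py talks to the nvmf_tgt's default /var/tmp/spdk.sock; sizes, NQNs and GUIDs are the ones from this run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # nvmf_tgt inside the test namespace: shared-memory id 0, tracepoint mask 0xFFFF, 4 cores.
    # (The test waits for /var/tmp/spdk.sock before issuing RPCs.)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # TCP transport plus a subsystem backed by a 64 MiB, 512 B-block malloc bdev.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # The discovery identify whose -L all debug trace follows:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all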
00:14:04.803 [2024-07-25 13:09:56.805891] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73825 ] 00:14:04.803 [2024-07-25 13:09:56.943920] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:04.803 [2024-07-25 13:09:56.943998] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:04.803 [2024-07-25 13:09:56.944005] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:04.803 [2024-07-25 13:09:56.944015] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:04.803 [2024-07-25 13:09:56.944023] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:04.803 [2024-07-25 13:09:56.944160] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:04.803 [2024-07-25 13:09:56.944238] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x147c2c0 0 00:14:04.803 [2024-07-25 13:09:56.949869] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:04.803 [2024-07-25 13:09:56.949894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:04.803 [2024-07-25 13:09:56.949917] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:04.803 [2024-07-25 13:09:56.949920] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:04.803 [2024-07-25 13:09:56.949964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.949972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.949976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.803 [2024-07-25 13:09:56.949989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:04.803 [2024-07-25 13:09:56.950018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.803 [2024-07-25 13:09:56.957866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.803 [2024-07-25 13:09:56.957901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.803 [2024-07-25 13:09:56.957906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.957927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.803 [2024-07-25 13:09:56.957939] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:04.803 [2024-07-25 13:09:56.957946] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:04.803 [2024-07-25 13:09:56.957952] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:04.803 [2024-07-25 13:09:56.957969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.957975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.803 
[2024-07-25 13:09:56.957979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.803 [2024-07-25 13:09:56.957988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.803 [2024-07-25 13:09:56.958013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.803 [2024-07-25 13:09:56.958074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.803 [2024-07-25 13:09:56.958081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.803 [2024-07-25 13:09:56.958084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.803 [2024-07-25 13:09:56.958094] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:04.803 [2024-07-25 13:09:56.958101] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:04.803 [2024-07-25 13:09:56.958124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.803 [2024-07-25 13:09:56.958140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.803 [2024-07-25 13:09:56.958173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.803 [2024-07-25 13:09:56.958224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.803 [2024-07-25 13:09:56.958231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.803 [2024-07-25 13:09:56.958235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.803 [2024-07-25 13:09:56.958245] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:04.803 [2024-07-25 13:09:56.958253] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:04.803 [2024-07-25 13:09:56.958261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.803 [2024-07-25 13:09:56.958276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.803 [2024-07-25 13:09:56.958293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.803 [2024-07-25 13:09:56.958344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.803 [2024-07-25 13:09:56.958351] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.803 [2024-07-25 13:09:56.958355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.803 [2024-07-25 13:09:56.958364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:04.803 [2024-07-25 13:09:56.958375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.803 [2024-07-25 13:09:56.958390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.803 [2024-07-25 13:09:56.958407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.803 [2024-07-25 13:09:56.958461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.803 [2024-07-25 13:09:56.958468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.803 [2024-07-25 13:09:56.958472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.803 [2024-07-25 13:09:56.958481] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:04.803 [2024-07-25 13:09:56.958486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:04.803 [2024-07-25 13:09:56.958494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:04.803 [2024-07-25 13:09:56.958599] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:04.803 [2024-07-25 13:09:56.958614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:04.803 [2024-07-25 13:09:56.958624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.803 [2024-07-25 13:09:56.958640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.803 [2024-07-25 13:09:56.958658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.803 [2024-07-25 13:09:56.958725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.803 [2024-07-25 13:09:56.958732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.803 [2024-07-25 13:09:56.958735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.803 
[2024-07-25 13:09:56.958739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.803 [2024-07-25 13:09:56.958744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:04.803 [2024-07-25 13:09:56.958754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.803 [2024-07-25 13:09:56.958762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.803 [2024-07-25 13:09:56.958769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.804 [2024-07-25 13:09:56.958786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.804 [2024-07-25 13:09:56.958847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.804 [2024-07-25 13:09:56.958855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.804 [2024-07-25 13:09:56.958858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.958862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.804 [2024-07-25 13:09:56.958867] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:04.804 [2024-07-25 13:09:56.958872] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:04.804 [2024-07-25 13:09:56.958880] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:04.804 [2024-07-25 13:09:56.958890] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:04.804 [2024-07-25 13:09:56.958901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.958906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.958913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.804 [2024-07-25 13:09:56.958932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.804 [2024-07-25 13:09:56.959035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:04.804 [2024-07-25 13:09:56.959043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:04.804 [2024-07-25 13:09:56.959046] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959050] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x147c2c0): datao=0, datal=4096, cccid=0 00:14:04.804 [2024-07-25 13:09:56.959055] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14bd940) on tqpair(0x147c2c0): expected_datao=0, payload_size=4096 00:14:04.804 [2024-07-25 13:09:56.959060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 
[2024-07-25 13:09:56.959067] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959072] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.804 [2024-07-25 13:09:56.959087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.804 [2024-07-25 13:09:56.959090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.804 [2024-07-25 13:09:56.959103] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:04.804 [2024-07-25 13:09:56.959108] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:04.804 [2024-07-25 13:09:56.959112] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:04.804 [2024-07-25 13:09:56.959122] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:04.804 [2024-07-25 13:09:56.959127] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:04.804 [2024-07-25 13:09:56.959132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:04.804 [2024-07-25 13:09:56.959140] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:04.804 [2024-07-25 13:09:56.959148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.804 [2024-07-25 13:09:56.959198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.804 [2024-07-25 13:09:56.959257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.804 [2024-07-25 13:09:56.959264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.804 [2024-07-25 13:09:56.959267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.804 [2024-07-25 13:09:56.959279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.804 [2024-07-25 13:09:56.959301] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.804 [2024-07-25 13:09:56.959321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.804 [2024-07-25 13:09:56.959340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.804 [2024-07-25 13:09:56.959359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:04.804 [2024-07-25 13:09:56.959367] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:04.804 [2024-07-25 13:09:56.959374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.804 [2024-07-25 13:09:56.959409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bd940, cid 0, qid 0 00:14:04.804 [2024-07-25 13:09:56.959416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bdac0, cid 1, qid 0 00:14:04.804 [2024-07-25 13:09:56.959421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bdc40, cid 2, qid 0 00:14:04.804 [2024-07-25 13:09:56.959426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.804 [2024-07-25 13:09:56.959431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bdf40, cid 4, qid 0 00:14:04.804 [2024-07-25 13:09:56.959539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.804 [2024-07-25 13:09:56.959545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.804 [2024-07-25 13:09:56.959549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bdf40) on tqpair=0x147c2c0 00:14:04.804 [2024-07-25 13:09:56.959558] 
nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:04.804 [2024-07-25 13:09:56.959564] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:04.804 [2024-07-25 13:09:56.959575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.804 [2024-07-25 13:09:56.959604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bdf40, cid 4, qid 0 00:14:04.804 [2024-07-25 13:09:56.959663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:04.804 [2024-07-25 13:09:56.959670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:04.804 [2024-07-25 13:09:56.959674] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959678] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x147c2c0): datao=0, datal=4096, cccid=4 00:14:04.804 [2024-07-25 13:09:56.959682] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14bdf40) on tqpair(0x147c2c0): expected_datao=0, payload_size=4096 00:14:04.804 [2024-07-25 13:09:56.959687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959694] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959698] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.804 [2024-07-25 13:09:56.959711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.804 [2024-07-25 13:09:56.959715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bdf40) on tqpair=0x147c2c0 00:14:04.804 [2024-07-25 13:09:56.959732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:04.804 [2024-07-25 13:09:56.959756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.804 [2024-07-25 13:09:56.959776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.804 [2024-07-25 13:09:56.959783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x147c2c0) 00:14:04.804 [2024-07-25 13:09:56.959789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.805 [2024-07-25 13:09:56.959813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x14bdf40, cid 4, qid 0 00:14:04.805 [2024-07-25 13:09:56.959820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14be0c0, cid 5, qid 0 00:14:04.805 [2024-07-25 13:09:56.959968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:04.805 [2024-07-25 13:09:56.959978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:04.805 [2024-07-25 13:09:56.959981] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.959985] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x147c2c0): datao=0, datal=1024, cccid=4 00:14:04.805 [2024-07-25 13:09:56.959990] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14bdf40) on tqpair(0x147c2c0): expected_datao=0, payload_size=1024 00:14:04.805 [2024-07-25 13:09:56.959994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960001] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960005] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.805 [2024-07-25 13:09:56.960017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.805 [2024-07-25 13:09:56.960020] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960024] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14be0c0) on tqpair=0x147c2c0 00:14:04.805 [2024-07-25 13:09:56.960043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.805 [2024-07-25 13:09:56.960051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.805 [2024-07-25 13:09:56.960055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bdf40) on tqpair=0x147c2c0 00:14:04.805 [2024-07-25 13:09:56.960071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x147c2c0) 00:14:04.805 [2024-07-25 13:09:56.960083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.805 [2024-07-25 13:09:56.960107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bdf40, cid 4, qid 0 00:14:04.805 [2024-07-25 13:09:56.960203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:04.805 [2024-07-25 13:09:56.960210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:04.805 [2024-07-25 13:09:56.960214] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960217] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x147c2c0): datao=0, datal=3072, cccid=4 00:14:04.805 [2024-07-25 13:09:56.960222] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14bdf40) on tqpair(0x147c2c0): expected_datao=0, payload_size=3072 00:14:04.805 [2024-07-25 13:09:56.960227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960234] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960239] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.805 [2024-07-25 13:09:56.960268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.805 [2024-07-25 13:09:56.960272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bdf40) on tqpair=0x147c2c0 00:14:04.805 [2024-07-25 13:09:56.960286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x147c2c0) 00:14:04.805 [2024-07-25 13:09:56.960298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.805 [2024-07-25 13:09:56.960320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bdf40, cid 4, qid 0 00:14:04.805 [2024-07-25 13:09:56.960394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:04.805 [2024-07-25 13:09:56.960401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:04.805 [2024-07-25 13:09:56.960404] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960408] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x147c2c0): datao=0, datal=8, cccid=4 00:14:04.805 [2024-07-25 13:09:56.960413] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14bdf40) on tqpair(0x147c2c0): expected_datao=0, payload_size=8 00:14:04.805 [2024-07-25 13:09:56.960417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960424] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:04.805 [2024-07-25 13:09:56.960428] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:04.805 ===================================================== 00:14:04.805 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:04.805 ===================================================== 00:14:04.805 Controller Capabilities/Features 00:14:04.805 ================================ 00:14:04.805 Vendor ID: 0000 00:14:04.805 Subsystem Vendor ID: 0000 00:14:04.805 Serial Number: .................... 00:14:04.805 Model Number: ........................................ 
00:14:04.805 Firmware Version: 24.09
00:14:04.805 Recommended Arb Burst: 0
00:14:04.805 IEEE OUI Identifier: 00 00 00
00:14:04.805 Multi-path I/O
00:14:04.805 May have multiple subsystem ports: No
00:14:04.805 May have multiple controllers: No
00:14:04.805 Associated with SR-IOV VF: No
00:14:04.805 Max Data Transfer Size: 131072
00:14:04.805 Max Number of Namespaces: 0
00:14:04.805 Max Number of I/O Queues: 1024
00:14:04.805 NVMe Specification Version (VS): 1.3
00:14:04.805 NVMe Specification Version (Identify): 1.3
00:14:04.805 Maximum Queue Entries: 128
00:14:04.805 Contiguous Queues Required: Yes
00:14:04.805 Arbitration Mechanisms Supported
00:14:04.805 Weighted Round Robin: Not Supported
00:14:04.805 Vendor Specific: Not Supported
00:14:04.805 Reset Timeout: 15000 ms
00:14:04.805 Doorbell Stride: 4 bytes
00:14:04.805 NVM Subsystem Reset: Not Supported
00:14:04.805 Command Sets Supported
00:14:04.805 NVM Command Set: Supported
00:14:04.805 Boot Partition: Not Supported
00:14:04.805 Memory Page Size Minimum: 4096 bytes
00:14:04.805 Memory Page Size Maximum: 4096 bytes
00:14:04.805 Persistent Memory Region: Not Supported
00:14:04.805 Optional Asynchronous Events Supported
00:14:04.805 Namespace Attribute Notices: Not Supported
00:14:04.805 Firmware Activation Notices: Not Supported
00:14:04.805 ANA Change Notices: Not Supported
00:14:04.805 PLE Aggregate Log Change Notices: Not Supported
00:14:04.805 LBA Status Info Alert Notices: Not Supported
00:14:04.805 EGE Aggregate Log Change Notices: Not Supported
00:14:04.805 Normal NVM Subsystem Shutdown event: Not Supported
00:14:04.805 Zone Descriptor Change Notices: Not Supported
00:14:04.805 Discovery Log Change Notices: Supported
00:14:04.805 Controller Attributes
00:14:04.805 128-bit Host Identifier: Not Supported
00:14:04.805 Non-Operational Permissive Mode: Not Supported
00:14:04.805 NVM Sets: Not Supported
00:14:04.805 Read Recovery Levels: Not Supported
00:14:04.805 Endurance Groups: Not Supported
00:14:04.805 Predictable Latency Mode: Not Supported
00:14:04.805 Traffic Based Keep ALive: Not Supported
00:14:04.805 Namespace Granularity: Not Supported
00:14:04.805 SQ Associations: Not Supported
00:14:04.805 UUID List: Not Supported
00:14:04.805 Multi-Domain Subsystem: Not Supported
00:14:04.805 Fixed Capacity Management: Not Supported
00:14:04.805 Variable Capacity Management: Not Supported
00:14:04.805 Delete Endurance Group: Not Supported
00:14:04.805 Delete NVM Set: Not Supported
00:14:04.805 Extended LBA Formats Supported: Not Supported
00:14:04.805 Flexible Data Placement Supported: Not Supported
00:14:04.805 
00:14:04.805 Controller Memory Buffer Support
00:14:04.805 ================================
00:14:04.805 Supported: No
00:14:04.805 
00:14:04.805 Persistent Memory Region Support
00:14:04.805 ================================
00:14:04.805 Supported: No
00:14:04.805 
00:14:04.805 Admin Command Set Attributes
00:14:04.805 ============================
00:14:04.805 Security Send/Receive: Not Supported
00:14:04.805 Format NVM: Not Supported
00:14:04.805 Firmware Activate/Download: Not Supported
00:14:04.805 Namespace Management: Not Supported
00:14:04.805 Device Self-Test: Not Supported
00:14:04.805 Directives: Not Supported
00:14:04.805 NVMe-MI: Not Supported
00:14:04.805 Virtualization Management: Not Supported
00:14:04.805 Doorbell Buffer Config: Not Supported
00:14:04.805 Get LBA Status Capability: Not Supported
00:14:04.805 Command & Feature Lockdown Capability: Not Supported
00:14:04.805 Abort Command Limit: 1
00:14:04.805 Async Event Request Limit: 4
00:14:04.805 Number of Firmware Slots: N/A
00:14:04.805 Firmware Slot 1 Read-Only: N/A
00:14:04.805 Firmware Activation Without Reset: N/A
00:14:04.806 Multiple Update Detection Support: N/A
00:14:04.806 Firmware Update Granularity: No Information Provided
00:14:04.806 Per-Namespace SMART Log: No
00:14:04.806 Asymmetric Namespace Access Log Page: Not Supported
00:14:04.806 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:14:04.806 Command Effects Log Page: Not Supported
00:14:04.806 Get Log Page Extended Data: Supported
00:14:04.806 Telemetry Log Pages: Not Supported
00:14:04.806 Persistent Event Log Pages: Not Supported
00:14:04.806 Supported Log Pages Log Page: May Support
00:14:04.806 Commands Supported & Effects Log Page: Not Supported
00:14:04.806 Feature Identifiers & Effects Log Page:May Support
00:14:04.806 NVMe-MI Commands & Effects Log Page: May Support
00:14:04.806 Data Area 4 for Telemetry Log: Not Supported
00:14:04.806 Error Log Page Entries Supported: 128
00:14:04.806 Keep Alive: Not Supported
00:14:04.806 
00:14:04.806 NVM Command Set Attributes
00:14:04.806 ==========================
00:14:04.806 Submission Queue Entry Size
00:14:04.806 Max: 1
00:14:04.806 Min: 1
00:14:04.806 Completion Queue Entry Size
00:14:04.806 Max: 1
00:14:04.806 Min: 1
00:14:04.806 Number of Namespaces: 0
00:14:04.806 Compare Command: Not Supported
00:14:04.806 Write Uncorrectable Command: Not Supported
00:14:04.806 Dataset Management Command: Not Supported
00:14:04.806 Write Zeroes Command: Not Supported
00:14:04.806 Set Features Save Field: Not Supported
00:14:04.806 Reservations: Not Supported
00:14:04.806 Timestamp: Not Supported
00:14:04.806 Copy: Not Supported
00:14:04.806 Volatile Write Cache: Not Present
00:14:04.806 Atomic Write Unit (Normal): 1
00:14:04.806 Atomic Write Unit (PFail): 1
00:14:04.806 Atomic Compare & Write Unit: 1
00:14:04.806 Fused Compare & Write: Supported
00:14:04.806 Scatter-Gather List
00:14:04.806 SGL Command Set: Supported
00:14:04.806 SGL Keyed: Supported
00:14:04.806 SGL Bit Bucket Descriptor: Not Supported
00:14:04.806 SGL Metadata Pointer: Not Supported
00:14:04.806 Oversized SGL: Not Supported
00:14:04.806 SGL Metadata Address: Not Supported
00:14:04.806 SGL Offset: Supported
00:14:04.806 Transport SGL Data Block: Not Supported
00:14:04.806 Replay Protected Memory Block: Not Supported
00:14:04.806 
00:14:04.806 Firmware Slot Information
00:14:04.806 =========================
00:14:04.806 Active slot: 0
00:14:04.806 
00:14:04.806 
00:14:04.806 Error Log
00:14:04.806 =========
00:14:04.806 
00:14:04.806 Active Namespaces
00:14:04.806 =================
00:14:04.806 Discovery Log Page
00:14:04.806 ==================
00:14:04.806 Generation Counter: 2
00:14:04.806 Number of Records: 2
00:14:04.806 Record Format: 0
00:14:04.806 
00:14:04.806 Discovery Log Entry 0
00:14:04.806 ----------------------
00:14:04.806 Transport Type: 3 (TCP)
00:14:04.806 Address Family: 1 (IPv4)
00:14:04.806 Subsystem Type: 3 (Current Discovery Subsystem)
00:14:04.806 Entry Flags:
00:14:04.806 Duplicate Returned Information: 1
00:14:04.806 Explicit Persistent Connection Support for Discovery: 1
00:14:04.806 Transport Requirements:
00:14:04.806 Secure Channel: Not Required
00:14:04.806 Port ID: 0 (0x0000)
00:14:04.806 Controller ID: 65535 (0xffff)
00:14:04.806 Admin Max SQ Size: 128
00:14:04.806 Transport Service Identifier: 4420
00:14:04.806 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:14:04.806 Transport Address: 10.0.0.2 00:14:04.806
Discovery Log Entry 1 00:14:04.806 ---------------------- 00:14:04.806 Transport Type: 3 (TCP) 00:14:04.806 Address Family: 1 (IPv4) 00:14:04.806 Subsystem Type: 2 (NVM Subsystem) 00:14:04.806 Entry Flags: 00:14:04.806 Duplicate Returned Information: 0 00:14:04.806 Explicit Persistent Connection Support for Discovery: 0 00:14:04.806 Transport Requirements: 00:14:04.806 Secure Channel: Not Required 00:14:04.806 Port ID: 0 (0x0000) 00:14:04.806 Controller ID: 65535 (0xffff) 00:14:04.806 Admin Max SQ Size: 128 00:14:04.806 Transport Service Identifier: 4420 00:14:04.806 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:04.806 Transport Address: 10.0.0.2 [2024-07-25 13:09:56.960442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.806 [2024-07-25 13:09:56.960450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.806 [2024-07-25 13:09:56.960453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bdf40) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.960586] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:04.806 [2024-07-25 13:09:56.960603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bd940) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.960611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.806 [2024-07-25 13:09:56.960617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bdac0) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.960622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.806 [2024-07-25 13:09:56.960627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bdc40) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.960631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.806 [2024-07-25 13:09:56.960636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.960641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.806 [2024-07-25 13:09:56.960651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.806 [2024-07-25 13:09:56.960667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.806 [2024-07-25 13:09:56.960718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.806 [2024-07-25 13:09:56.960777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.806 [2024-07-25 13:09:56.960784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.806 [2024-07-25 13:09:56.960788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960792] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.960805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.806 [2024-07-25 13:09:56.960821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.806 [2024-07-25 13:09:56.960844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.806 [2024-07-25 13:09:56.960932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.806 [2024-07-25 13:09:56.960940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.806 [2024-07-25 13:09:56.960944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.960953] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:04.806 [2024-07-25 13:09:56.960959] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:04.806 [2024-07-25 13:09:56.960969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.960978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.806 [2024-07-25 13:09:56.960986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.806 [2024-07-25 13:09:56.961016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.806 [2024-07-25 13:09:56.961065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.806 [2024-07-25 13:09:56.961071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.806 [2024-07-25 13:09:56.961075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.961079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.806 [2024-07-25 13:09:56.961090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.961094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.806 [2024-07-25 13:09:56.961098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.806 [2024-07-25 13:09:56.961105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.806 [2024-07-25 13:09:56.961121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.961182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.961189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 
13:09:56.961193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.961207] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.807 [2024-07-25 13:09:56.961224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.807 [2024-07-25 13:09:56.961240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.961296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.961303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 13:09:56.961306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.961320] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.807 [2024-07-25 13:09:56.961336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.807 [2024-07-25 13:09:56.961352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.961408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.961415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 13:09:56.961419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.961433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.807 [2024-07-25 13:09:56.961448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.807 [2024-07-25 13:09:56.961464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.961527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.961534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 13:09:56.961538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on 
tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.961551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.807 [2024-07-25 13:09:56.961566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.807 [2024-07-25 13:09:56.961582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.961633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.961639] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 13:09:56.961643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.961657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961665] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.807 [2024-07-25 13:09:56.961672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.807 [2024-07-25 13:09:56.961688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.961733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.961739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 13:09:56.961743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.961756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.807 [2024-07-25 13:09:56.961771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.807 [2024-07-25 13:09:56.961787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.961840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.961847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 13:09:56.961850] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.961854] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.961864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.965949] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.965974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x147c2c0) 00:14:04.807 [2024-07-25 13:09:56.965985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:04.807 [2024-07-25 13:09:56.966017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14bddc0, cid 3, qid 0 00:14:04.807 [2024-07-25 13:09:56.966090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:04.807 [2024-07-25 13:09:56.966098] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:04.807 [2024-07-25 13:09:56.966102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:04.807 [2024-07-25 13:09:56.966106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14bddc0) on tqpair=0x147c2c0 00:14:04.807 [2024-07-25 13:09:56.966116] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:04.807 00:14:04.807 13:09:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:05.069 [2024-07-25 13:09:57.005431] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:05.069 [2024-07-25 13:09:57.005467] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73827 ] 00:14:05.069 [2024-07-25 13:09:57.139878] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:05.069 [2024-07-25 13:09:57.139959] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:05.069 [2024-07-25 13:09:57.139967] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:05.069 [2024-07-25 13:09:57.139976] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:05.069 [2024-07-25 13:09:57.139983] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:05.069 [2024-07-25 13:09:57.140073] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:05.069 [2024-07-25 13:09:57.140111] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12602c0 0 00:14:05.069 [2024-07-25 13:09:57.153899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:05.069 [2024-07-25 13:09:57.153921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:05.069 [2024-07-25 13:09:57.153942] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:05.069 [2024-07-25 13:09:57.153946] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:05.069 [2024-07-25 13:09:57.153982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.153989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.153993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.069 [2024-07-25 13:09:57.154003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:05.069 [2024-07-25 13:09:57.154032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.069 [2024-07-25 13:09:57.161865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.069 [2024-07-25 13:09:57.161887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.069 [2024-07-25 13:09:57.161907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.161912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.069 [2024-07-25 13:09:57.161921] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:05.069 [2024-07-25 13:09:57.161928] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:05.069 [2024-07-25 13:09:57.161934] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:05.069 [2024-07-25 13:09:57.161949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.161953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.161957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.069 [2024-07-25 13:09:57.161966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.069 [2024-07-25 13:09:57.161991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.069 [2024-07-25 13:09:57.162046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.069 [2024-07-25 13:09:57.162053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.069 [2024-07-25 13:09:57.162056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.069 [2024-07-25 13:09:57.162065] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:05.069 [2024-07-25 13:09:57.162072] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:05.069 [2024-07-25 13:09:57.162079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.069 [2024-07-25 13:09:57.162094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.069 [2024-07-25 13:09:57.162112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.069 [2024-07-25 13:09:57.162186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.069 [2024-07-25 13:09:57.162193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:14:05.069 [2024-07-25 13:09:57.162197] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.069 [2024-07-25 13:09:57.162207] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:05.069 [2024-07-25 13:09:57.162214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:05.069 [2024-07-25 13:09:57.162222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162226] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.069 [2024-07-25 13:09:57.162237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.069 [2024-07-25 13:09:57.162255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.069 [2024-07-25 13:09:57.162301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.069 [2024-07-25 13:09:57.162307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.069 [2024-07-25 13:09:57.162311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.069 [2024-07-25 13:09:57.162320] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:05.069 [2024-07-25 13:09:57.162330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.069 [2024-07-25 13:09:57.162346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.069 [2024-07-25 13:09:57.162364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.069 [2024-07-25 13:09:57.162404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.069 [2024-07-25 13:09:57.162411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.069 [2024-07-25 13:09:57.162414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162418] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.069 [2024-07-25 13:09:57.162423] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:05.069 [2024-07-25 13:09:57.162428] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:05.069 [2024-07-25 13:09:57.162436] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 
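
The trace above is the admin-queue bring-up that spdk_nvme_identify goes through for nqn.2016-06.io.spdk:cnode1: FABRIC CONNECT, reads of VS and CAP, a check that CC.EN = 0 and CSTS.RDY = 0, and then enabling the controller by writing CC.EN = 1. SPDK's public spdk_nvme_connect() drives this same sequence internally. The following is only a minimal sketch of connecting to the same target from application code, with the address and NQN taken from the log; the program name, default controller options, and the reduced error handling are assumptions, not part of this test run.

/*
 * Sketch: synchronous connect to the TCP subsystem exercised above.
 * spdk_nvme_connect() performs the controller bring-up that the trace
 * logs (FABRIC CONNECT, read VS/CAP, set CC.EN, wait for CSTS.RDY).
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&opts);
    opts.name = "connect_sketch";   /* hypothetical application name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Returns once the controller reaches the ready state (CSTS.RDY = 1). */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect to %s failed\n", trid.subnqn);
        return 1;
    }

    printf("connected to %s\n", trid.subnqn);
    spdk_nvme_detach(ctrlr);
    return 0;
}

Building such a sketch would need SPDK's headers and libraries, which the spdk_nvme_identify binary used by this test already links against.
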
00:14:05.069 [2024-07-25 13:09:57.162541] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:05.069 [2024-07-25 13:09:57.162546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:05.069 [2024-07-25 13:09:57.162554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.069 [2024-07-25 13:09:57.162569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.069 [2024-07-25 13:09:57.162587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.069 [2024-07-25 13:09:57.162631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.069 [2024-07-25 13:09:57.162638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.069 [2024-07-25 13:09:57.162642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.069 [2024-07-25 13:09:57.162650] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:05.069 [2024-07-25 13:09:57.162660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.069 [2024-07-25 13:09:57.162669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.069 [2024-07-25 13:09:57.162676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.070 [2024-07-25 13:09:57.162693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.070 [2024-07-25 13:09:57.162736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.070 [2024-07-25 13:09:57.162743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.070 [2024-07-25 13:09:57.162746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.162750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.070 [2024-07-25 13:09:57.162755] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:05.070 [2024-07-25 13:09:57.162760] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.162768] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:05.070 [2024-07-25 13:09:57.162778] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.162788] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.162792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.162799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.070 [2024-07-25 13:09:57.162818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.070 [2024-07-25 13:09:57.162907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.070 [2024-07-25 13:09:57.162916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.070 [2024-07-25 13:09:57.162920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.162924] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=4096, cccid=0 00:14:05.070 [2024-07-25 13:09:57.162928] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a1940) on tqpair(0x12602c0): expected_datao=0, payload_size=4096 00:14:05.070 [2024-07-25 13:09:57.162933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.162940] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.162945] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.162953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.070 [2024-07-25 13:09:57.162959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.070 [2024-07-25 13:09:57.162962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.162966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.070 [2024-07-25 13:09:57.162974] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:05.070 [2024-07-25 13:09:57.162979] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:05.070 [2024-07-25 13:09:57.162984] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:05.070 [2024-07-25 13:09:57.162992] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:05.070 [2024-07-25 13:09:57.162997] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:05.070 [2024-07-25 13:09:57.163002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.070 [2024-07-25 13:09:57.163055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.070 [2024-07-25 13:09:57.163118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.070 [2024-07-25 13:09:57.163125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.070 [2024-07-25 13:09:57.163129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.070 [2024-07-25 13:09:57.163140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.070 [2024-07-25 13:09:57.163162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.070 [2024-07-25 13:09:57.163181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.070 [2024-07-25 13:09:57.163202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.070 [2024-07-25 13:09:57.163220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163239] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
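
Once identify controller completes, the trace reports the negotiated limits for cnode1 (transport max_xfer_size, the MDTS-derived 131072-byte cap, CNTLID 0x0001, 16 SGEs, fused compare-and-write) and then configures AER and the keep-alive timer. From application code, the same identify data is exposed through spdk_nvme_ctrlr_get_data() on a connected controller; the helper below is a sketch under that assumption, with field names following the NVMe Identify Controller data structure.

/*
 * Sketch: print a few identify-controller fields for a controller that
 * was connected as in the earlier sketch (ctrlr is assumed valid).
 */
#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    /* SN/MN are space padded, not NUL terminated, hence the width limits. */
    printf("Serial Number: %.20s\n", (const char *)cdata->sn);
    printf("Model Number:  %.40s\n", (const char *)cdata->mn);
    printf("CNTLID:        0x%04x\n", (unsigned)cdata->cntlid);
    /* MDTS is a power-of-two multiple of the minimum page size (0 = no limit). */
    printf("MDTS:          %u\n", (unsigned)cdata->mdts);
    /* KAS is the keep-alive granularity in 100 ms units. */
    printf("KAS:           %u\n", (unsigned)cdata->kas);
}
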
00:14:05.070 [2024-07-25 13:09:57.163271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1940, cid 0, qid 0 00:14:05.070 [2024-07-25 13:09:57.163278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1ac0, cid 1, qid 0 00:14:05.070 [2024-07-25 13:09:57.163283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1c40, cid 2, qid 0 00:14:05.070 [2024-07-25 13:09:57.163288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1dc0, cid 3, qid 0 00:14:05.070 [2024-07-25 13:09:57.163293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1f40, cid 4, qid 0 00:14:05.070 [2024-07-25 13:09:57.163373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.070 [2024-07-25 13:09:57.163380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.070 [2024-07-25 13:09:57.163384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1f40) on tqpair=0x12602c0 00:14:05.070 [2024-07-25 13:09:57.163393] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:05.070 [2024-07-25 13:09:57.163399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163407] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.070 [2024-07-25 13:09:57.163454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1f40, cid 4, qid 0 00:14:05.070 [2024-07-25 13:09:57.163506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.070 [2024-07-25 13:09:57.163513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.070 [2024-07-25 13:09:57.163517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1f40) on tqpair=0x12602c0 00:14:05.070 [2024-07-25 13:09:57.163585] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:05.070 [2024-07-25 13:09:57.163606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x12602c0) 00:14:05.070 [2024-07-25 13:09:57.163618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.070 [2024-07-25 13:09:57.163638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1f40, cid 4, qid 0 00:14:05.070 [2024-07-25 13:09:57.163694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.070 [2024-07-25 13:09:57.163701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.070 [2024-07-25 13:09:57.163705] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163709] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=4096, cccid=4 00:14:05.070 [2024-07-25 13:09:57.163713] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a1f40) on tqpair(0x12602c0): expected_datao=0, payload_size=4096 00:14:05.070 [2024-07-25 13:09:57.163718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.070 [2024-07-25 13:09:57.163725] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163744] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.163758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.163761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1f40) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.163776] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:05.071 [2024-07-25 13:09:57.163787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.163798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.163806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.163817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.071 [2024-07-25 13:09:57.163837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1f40, cid 4, qid 0 00:14:05.071 [2024-07-25 13:09:57.163930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.071 [2024-07-25 13:09:57.163938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.071 [2024-07-25 13:09:57.163942] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163946] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=4096, cccid=4 00:14:05.071 [2024-07-25 13:09:57.163951] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a1f40) on tqpair(0x12602c0): expected_datao=0, payload_size=4096 00:14:05.071 [2024-07-25 
13:09:57.163955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163962] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163966] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.163981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.163985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.163989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1f40) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.164004] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.071 [2024-07-25 13:09:57.164057] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1f40, cid 4, qid 0 00:14:05.071 [2024-07-25 13:09:57.164113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.071 [2024-07-25 13:09:57.164121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.071 [2024-07-25 13:09:57.164124] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164128] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=4096, cccid=4 00:14:05.071 [2024-07-25 13:09:57.164133] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a1f40) on tqpair(0x12602c0): expected_datao=0, payload_size=4096 00:14:05.071 [2024-07-25 13:09:57.164137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164144] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164148] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.164162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.164166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1f40) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.164179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164198] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164205] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164221] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:05.071 [2024-07-25 13:09:57.164226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:05.071 [2024-07-25 13:09:57.164231] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:05.071 [2024-07-25 13:09:57.164246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.071 [2024-07-25 13:09:57.164265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.071 [2024-07-25 13:09:57.164302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1f40, cid 4, qid 0 00:14:05.071 [2024-07-25 13:09:57.164310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a20c0, cid 5, qid 0 00:14:05.071 [2024-07-25 13:09:57.164371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.164378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.164381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1f40) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.164392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.164398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.164402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a20c0) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.164416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.071 [2024-07-25 13:09:57.164445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a20c0, cid 5, qid 0 00:14:05.071 [2024-07-25 13:09:57.164490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.164497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.164501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a20c0) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.164515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.071 [2024-07-25 13:09:57.164558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a20c0, cid 5, qid 0 00:14:05.071 [2024-07-25 13:09:57.164610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.164616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.164620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a20c0) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.164634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.071 [2024-07-25 13:09:57.164662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a20c0, cid 5, qid 0 00:14:05.071 [2024-07-25 13:09:57.164752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.071 [2024-07-25 13:09:57.164760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.071 [2024-07-25 13:09:57.164764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a20c0) on tqpair=0x12602c0 00:14:05.071 [2024-07-25 13:09:57.164786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.071 [2024-07-25 13:09:57.164808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.071 [2024-07-25 13:09:57.164812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x12602c0) 00:14:05.071 [2024-07-25 13:09:57.164819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.072 [2024-07-25 13:09:57.164827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.164831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12602c0) 00:14:05.072 [2024-07-25 13:09:57.164837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.072 [2024-07-25 13:09:57.164845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.164849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12602c0) 00:14:05.072 [2024-07-25 13:09:57.164856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.072 [2024-07-25 13:09:57.164892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a20c0, cid 5, qid 0 00:14:05.072 [2024-07-25 13:09:57.164902] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1f40, cid 4, qid 0 00:14:05.072 [2024-07-25 13:09:57.164907] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a2240, cid 6, qid 0 00:14:05.072 [2024-07-25 13:09:57.164912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a23c0, cid 7, qid 0 00:14:05.072 [2024-07-25 13:09:57.165074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.072 [2024-07-25 13:09:57.165081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.072 [2024-07-25 13:09:57.165085] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165088] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=8192, cccid=5 00:14:05.072 [2024-07-25 13:09:57.165093] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a20c0) on tqpair(0x12602c0): expected_datao=0, payload_size=8192 00:14:05.072 [2024-07-25 13:09:57.165097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165113] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165118] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.072 [2024-07-25 13:09:57.165129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.072 [2024-07-25 13:09:57.165133] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165136] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=512, cccid=4 00:14:05.072 [2024-07-25 13:09:57.165141] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a1f40) on tqpair(0x12602c0): expected_datao=0, payload_size=512 00:14:05.072 [2024-07-25 13:09:57.165145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165151] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165155] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.072 [2024-07-25 13:09:57.165166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.072 [2024-07-25 13:09:57.165169] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165173] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=512, cccid=6 00:14:05.072 [2024-07-25 13:09:57.165178] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a2240) on tqpair(0x12602c0): expected_datao=0, payload_size=512 00:14:05.072 [2024-07-25 13:09:57.165182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165188] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165191] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.072 [2024-07-25 13:09:57.165202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.072 [2024-07-25 13:09:57.165206] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165210] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12602c0): datao=0, datal=4096, cccid=7 00:14:05.072 [2024-07-25 13:09:57.165214] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12a23c0) on tqpair(0x12602c0): expected_datao=0, payload_size=4096 00:14:05.072 [2024-07-25 13:09:57.165218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165224] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165228] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.072 [2024-07-25 13:09:57.165242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.072 [2024-07-25 13:09:57.165245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a20c0) on tqpair=0x12602c0 00:14:05.072 [2024-07-25 13:09:57.165265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.072 [2024-07-25 13:09:57.165272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.072 [2024-07-25 13:09:57.165276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1f40) on tqpair=0x12602c0 00:14:05.072 [2024-07-25 13:09:57.165290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.072 [2024-07-25 13:09:57.165296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.072 [2024-07-25 13:09:57.165300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.072 [2024-07-25 13:09:57.165304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a2240) on tqpair=0x12602c0 00:14:05.072 [2024-07-25 13:09:57.165311] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.072 [2024-07-25 13:09:57.165317] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.072 [2024-07-25 13:09:57.165320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.072 ===================================================== 00:14:05.072 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.072 ===================================================== 00:14:05.072 Controller Capabilities/Features 00:14:05.072 ================================ 00:14:05.072 Vendor ID: 8086 00:14:05.072 Subsystem Vendor ID: 8086 00:14:05.072 Serial Number: SPDK00000000000001 00:14:05.072 Model Number: SPDK bdev Controller 00:14:05.072 Firmware Version: 24.09 00:14:05.072 Recommended Arb Burst: 6 00:14:05.072 IEEE OUI Identifier: e4 d2 5c 00:14:05.072 Multi-path I/O 00:14:05.072 May have multiple subsystem ports: Yes 00:14:05.072 May have multiple controllers: Yes 00:14:05.072 Associated with SR-IOV VF: No 00:14:05.072 Max Data Transfer Size: 131072 00:14:05.072 Max Number of Namespaces: 32 00:14:05.072 Max Number of I/O Queues: 127 00:14:05.072 NVMe Specification Version (VS): 1.3 00:14:05.072 NVMe Specification Version (Identify): 1.3 00:14:05.072 Maximum Queue Entries: 128 00:14:05.072 Contiguous Queues Required: Yes 00:14:05.072 Arbitration Mechanisms Supported 00:14:05.072 Weighted Round Robin: Not Supported 00:14:05.072 Vendor Specific: Not Supported 00:14:05.072 Reset Timeout: 15000 ms 00:14:05.072 Doorbell Stride: 4 bytes 00:14:05.072 NVM Subsystem Reset: Not Supported 00:14:05.072 Command Sets Supported 00:14:05.072 NVM Command Set: Supported 00:14:05.072 Boot Partition: Not Supported 00:14:05.072 Memory Page Size Minimum: 4096 bytes 00:14:05.072 Memory Page Size Maximum: 4096 bytes 00:14:05.072 Persistent Memory Region: Not Supported 00:14:05.072 Optional Asynchronous Events Supported 00:14:05.072 Namespace Attribute Notices: Supported 00:14:05.072 Firmware Activation Notices: Not Supported 00:14:05.072 ANA Change Notices: Not Supported 00:14:05.072 PLE Aggregate Log Change Notices: Not Supported 00:14:05.072 LBA Status Info Alert Notices: Not Supported 00:14:05.072 EGE Aggregate Log Change Notices: Not Supported 00:14:05.072 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.072 Zone Descriptor Change Notices: Not Supported 00:14:05.072 Discovery Log Change Notices: Not Supported 00:14:05.072 Controller Attributes 00:14:05.072 128-bit Host Identifier: Supported 00:14:05.072 Non-Operational Permissive Mode: Not Supported 00:14:05.072 NVM Sets: Not Supported 00:14:05.072 Read Recovery Levels: Not Supported 00:14:05.072 Endurance Groups: Not Supported 00:14:05.072 Predictable Latency Mode: Not Supported 00:14:05.072 Traffic Based Keep ALive: Not Supported 00:14:05.072 Namespace Granularity: Not Supported 00:14:05.072 SQ Associations: Not Supported 00:14:05.072 UUID List: Not Supported 00:14:05.072 Multi-Domain Subsystem: Not Supported 00:14:05.072 Fixed Capacity Management: Not Supported 00:14:05.072 Variable Capacity Management: Not Supported 00:14:05.072 Delete Endurance Group: Not Supported 00:14:05.072 Delete NVM Set: Not Supported 00:14:05.072 Extended LBA Formats Supported: Not Supported 00:14:05.072 Flexible Data Placement Supported: Not Supported 00:14:05.072 00:14:05.072 Controller Memory Buffer Support 00:14:05.072 ================================ 00:14:05.072 Supported: No 00:14:05.072 00:14:05.072 Persistent Memory Region Support 00:14:05.072 ================================ 00:14:05.072 Supported: No 00:14:05.072 00:14:05.072 Admin Command 
Set Attributes 00:14:05.072 ============================ 00:14:05.072 Security Send/Receive: Not Supported 00:14:05.073 Format NVM: Not Supported 00:14:05.073 Firmware Activate/Download: Not Supported 00:14:05.073 Namespace Management: Not Supported 00:14:05.073 Device Self-Test: Not Supported 00:14:05.073 Directives: Not Supported 00:14:05.073 NVMe-MI: Not Supported 00:14:05.073 Virtualization Management: Not Supported 00:14:05.073 Doorbell Buffer Config: Not Supported 00:14:05.073 Get LBA Status Capability: Not Supported 00:14:05.073 Command & Feature Lockdown Capability: Not Supported 00:14:05.073 Abort Command Limit: 4 00:14:05.073 Async Event Request Limit: 4 00:14:05.073 Number of Firmware Slots: N/A 00:14:05.073 Firmware Slot 1 Read-Only: N/A 00:14:05.073 Firmware Activation Without Reset: N/A 00:14:05.073 Multiple Update Detection Support: N/A 00:14:05.073 Firmware Update Granularity: No Information Provided 00:14:05.073 Per-Namespace SMART Log: No 00:14:05.073 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.073 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:05.073 Command Effects Log Page: Supported 00:14:05.073 Get Log Page Extended Data: Supported 00:14:05.073 Telemetry Log Pages: Not Supported 00:14:05.073 Persistent Event Log Pages: Not Supported 00:14:05.073 Supported Log Pages Log Page: May Support 00:14:05.073 Commands Supported & Effects Log Page: Not Supported 00:14:05.073 Feature Identifiers & Effects Log Page:May Support 00:14:05.073 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.073 Data Area 4 for Telemetry Log: Not Supported 00:14:05.073 Error Log Page Entries Supported: 128 00:14:05.073 Keep Alive: Supported 00:14:05.073 Keep Alive Granularity: 10000 ms 00:14:05.073 00:14:05.073 NVM Command Set Attributes 00:14:05.073 ========================== 00:14:05.073 Submission Queue Entry Size 00:14:05.073 Max: 64 00:14:05.073 Min: 64 00:14:05.073 Completion Queue Entry Size 00:14:05.073 Max: 16 00:14:05.073 Min: 16 00:14:05.073 Number of Namespaces: 32 00:14:05.073 Compare Command: Supported 00:14:05.073 Write Uncorrectable Command: Not Supported 00:14:05.073 Dataset Management Command: Supported 00:14:05.073 Write Zeroes Command: Supported 00:14:05.073 Set Features Save Field: Not Supported 00:14:05.073 Reservations: Supported 00:14:05.073 Timestamp: Not Supported 00:14:05.073 Copy: Supported 00:14:05.073 Volatile Write Cache: Present 00:14:05.073 Atomic Write Unit (Normal): 1 00:14:05.073 Atomic Write Unit (PFail): 1 00:14:05.073 Atomic Compare & Write Unit: 1 00:14:05.073 Fused Compare & Write: Supported 00:14:05.073 Scatter-Gather List 00:14:05.073 SGL Command Set: Supported 00:14:05.073 SGL Keyed: Supported 00:14:05.073 SGL Bit Bucket Descriptor: Not Supported 00:14:05.073 SGL Metadata Pointer: Not Supported 00:14:05.073 Oversized SGL: Not Supported 00:14:05.073 SGL Metadata Address: Not Supported 00:14:05.073 SGL Offset: Supported 00:14:05.073 Transport SGL Data Block: Not Supported 00:14:05.073 Replay Protected Memory Block: Not Supported 00:14:05.073 00:14:05.073 Firmware Slot Information 00:14:05.073 ========================= 00:14:05.073 Active slot: 1 00:14:05.073 Slot 1 Firmware Revision: 24.09 00:14:05.073 00:14:05.073 00:14:05.073 Commands Supported and Effects 00:14:05.073 ============================== 00:14:05.073 Admin Commands 00:14:05.073 -------------- 00:14:05.073 Get Log Page (02h): Supported 00:14:05.073 Identify (06h): Supported 00:14:05.073 Abort (08h): Supported 00:14:05.073 Set Features (09h): Supported 00:14:05.073 Get 
Features (0Ah): Supported 00:14:05.073 Asynchronous Event Request (0Ch): Supported 00:14:05.073 Keep Alive (18h): Supported 00:14:05.073 I/O Commands 00:14:05.073 ------------ 00:14:05.073 Flush (00h): Supported LBA-Change 00:14:05.073 Write (01h): Supported LBA-Change 00:14:05.073 Read (02h): Supported 00:14:05.073 Compare (05h): Supported 00:14:05.073 Write Zeroes (08h): Supported LBA-Change 00:14:05.073 Dataset Management (09h): Supported LBA-Change 00:14:05.073 Copy (19h): Supported LBA-Change 00:14:05.073 00:14:05.073 Error Log 00:14:05.073 ========= 00:14:05.073 00:14:05.073 Arbitration 00:14:05.073 =========== 00:14:05.073 Arbitration Burst: 1 00:14:05.073 00:14:05.073 Power Management 00:14:05.073 ================ 00:14:05.073 Number of Power States: 1 00:14:05.073 Current Power State: Power State #0 00:14:05.073 Power State #0: 00:14:05.073 Max Power: 0.00 W 00:14:05.073 Non-Operational State: Operational 00:14:05.073 Entry Latency: Not Reported 00:14:05.073 Exit Latency: Not Reported 00:14:05.073 Relative Read Throughput: 0 00:14:05.073 Relative Read Latency: 0 00:14:05.073 Relative Write Throughput: 0 00:14:05.073 Relative Write Latency: 0 00:14:05.073 Idle Power: Not Reported 00:14:05.073 Active Power: Not Reported 00:14:05.073 Non-Operational Permissive Mode: Not Supported 00:14:05.073 00:14:05.073 Health Information 00:14:05.073 ================== 00:14:05.073 Critical Warnings: 00:14:05.073 Available Spare Space: OK 00:14:05.073 Temperature: OK 00:14:05.073 Device Reliability: OK 00:14:05.073 Read Only: No 00:14:05.073 Volatile Memory Backup: OK 00:14:05.073 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:05.073 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:05.073 Available Spare: 0% 00:14:05.073 Available Spare Threshold: 0% 00:14:05.073 Life Percentage Used:[2024-07-25 13:09:57.165324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a23c0) on tqpair=0x12602c0 00:14:05.073 [2024-07-25 13:09:57.165420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.073 [2024-07-25 13:09:57.165427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12602c0) 00:14:05.073 [2024-07-25 13:09:57.165435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.073 [2024-07-25 13:09:57.165457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a23c0, cid 7, qid 0 00:14:05.073 [2024-07-25 13:09:57.165503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.073 [2024-07-25 13:09:57.165510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.073 [2024-07-25 13:09:57.165514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.073 [2024-07-25 13:09:57.165518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a23c0) on tqpair=0x12602c0 00:14:05.073 [2024-07-25 13:09:57.165556] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:05.073 [2024-07-25 13:09:57.165567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1940) on tqpair=0x12602c0 00:14:05.073 [2024-07-25 13:09:57.165573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.073 [2024-07-25 13:09:57.165578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1ac0) on 
tqpair=0x12602c0 00:14:05.073 [2024-07-25 13:09:57.165583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.073 [2024-07-25 13:09:57.165588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1c40) on tqpair=0x12602c0 00:14:05.073 [2024-07-25 13:09:57.165592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.073 [2024-07-25 13:09:57.165597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1dc0) on tqpair=0x12602c0 00:14:05.073 [2024-07-25 13:09:57.165602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.074 [2024-07-25 13:09:57.165610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.165615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.165618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12602c0) 00:14:05.074 [2024-07-25 13:09:57.165626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.074 [2024-07-25 13:09:57.165647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1dc0, cid 3, qid 0 00:14:05.074 [2024-07-25 13:09:57.165688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.074 [2024-07-25 13:09:57.165695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.074 [2024-07-25 13:09:57.165698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.165702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1dc0) on tqpair=0x12602c0 00:14:05.074 [2024-07-25 13:09:57.165710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.165714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.165718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12602c0) 00:14:05.074 [2024-07-25 13:09:57.165725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.074 [2024-07-25 13:09:57.165746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1dc0, cid 3, qid 0 00:14:05.074 [2024-07-25 13:09:57.165808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.074 [2024-07-25 13:09:57.165815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.074 [2024-07-25 13:09:57.165818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.165822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1dc0) on tqpair=0x12602c0 00:14:05.074 [2024-07-25 13:09:57.165827] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:05.074 [2024-07-25 13:09:57.165832] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:05.074 [2024-07-25 13:09:57.165841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.165845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:05.074 [2024-07-25 13:09:57.165849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12602c0) 00:14:05.074 [2024-07-25 13:09:57.165856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.074 [2024-07-25 13:09:57.169957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12a1dc0, cid 3, qid 0 00:14:05.074 [2024-07-25 13:09:57.170005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.074 [2024-07-25 13:09:57.170013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.074 [2024-07-25 13:09:57.170016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.074 [2024-07-25 13:09:57.170020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12a1dc0) on tqpair=0x12602c0 00:14:05.074 [2024-07-25 13:09:57.170029] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:14:05.074 0% 00:14:05.074 Data Units Read: 0 00:14:05.074 Data Units Written: 0 00:14:05.074 Host Read Commands: 0 00:14:05.074 Host Write Commands: 0 00:14:05.074 Controller Busy Time: 0 minutes 00:14:05.074 Power Cycles: 0 00:14:05.074 Power On Hours: 0 hours 00:14:05.074 Unsafe Shutdowns: 0 00:14:05.074 Unrecoverable Media Errors: 0 00:14:05.074 Lifetime Error Log Entries: 0 00:14:05.074 Warning Temperature Time: 0 minutes 00:14:05.074 Critical Temperature Time: 0 minutes 00:14:05.074 00:14:05.074 Number of Queues 00:14:05.074 ================ 00:14:05.074 Number of I/O Submission Queues: 127 00:14:05.074 Number of I/O Completion Queues: 127 00:14:05.074 00:14:05.074 Active Namespaces 00:14:05.074 ================= 00:14:05.074 Namespace ID:1 00:14:05.074 Error Recovery Timeout: Unlimited 00:14:05.074 Command Set Identifier: NVM (00h) 00:14:05.074 Deallocate: Supported 00:14:05.074 Deallocated/Unwritten Error: Not Supported 00:14:05.074 Deallocated Read Value: Unknown 00:14:05.074 Deallocate in Write Zeroes: Not Supported 00:14:05.074 Deallocated Guard Field: 0xFFFF 00:14:05.074 Flush: Supported 00:14:05.074 Reservation: Supported 00:14:05.074 Namespace Sharing Capabilities: Multiple Controllers 00:14:05.074 Size (in LBAs): 131072 (0GiB) 00:14:05.074 Capacity (in LBAs): 131072 (0GiB) 00:14:05.074 Utilization (in LBAs): 131072 (0GiB) 00:14:05.074 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:05.074 EUI64: ABCDEF0123456789 00:14:05.074 UUID: 76482b55-8252-43e6-9b40-611464732ca7 00:14:05.074 Thin Provisioning: Not Supported 00:14:05.074 Per-NS Atomic Units: Yes 00:14:05.074 Atomic Boundary Size (Normal): 0 00:14:05.074 Atomic Boundary Size (PFail): 0 00:14:05.074 Atomic Boundary Offset: 0 00:14:05.074 Maximum Single Source Range Length: 65535 00:14:05.074 Maximum Copy Length: 65535 00:14:05.074 Maximum Source Range Count: 1 00:14:05.074 NGUID/EUI64 Never Reused: No 00:14:05.074 Namespace Write Protected: No 00:14:05.074 Number of LBA Formats: 1 00:14:05.074 Current LBA Format: LBA Format #00 00:14:05.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:05.074 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.074 13:09:57 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.074 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.074 rmmod nvme_tcp 00:14:05.074 rmmod nvme_fabrics 00:14:05.332 rmmod nvme_keyring 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 73789 ']' 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 73789 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 73789 ']' 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 73789 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73789 00:14:05.332 killing process with pid 73789 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73789' 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 73789 00:14:05.332 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 73789 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:05.590 00:14:05.590 real 0m2.516s 00:14:05.590 user 0m6.968s 00:14:05.590 sys 0m0.642s 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.590 ************************************ 00:14:05.590 END TEST nvmf_identify 00:14:05.590 ************************************ 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:05.590 ************************************ 00:14:05.590 START TEST nvmf_perf 00:14:05.590 ************************************ 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:05.590 * Looking for test storage... 00:14:05.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.590 13:09:57 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.590 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:05.591 Cannot find device "nvmf_tgt_br" 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.591 Cannot find device "nvmf_tgt_br2" 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:05.591 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:05.850 Cannot find device "nvmf_tgt_br" 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:05.850 Cannot find device "nvmf_tgt_br2" 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:05.850 13:09:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.850 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.850 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.850 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.850 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.850 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:06.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:06.109 00:14:06.109 --- 10.0.0.2 ping statistics --- 00:14:06.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.109 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:06.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:06.109 00:14:06.109 --- 10.0.0.3 ping statistics --- 00:14:06.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.109 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:06.109 00:14:06.109 --- 10.0.0.1 ping statistics --- 00:14:06.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.109 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=73994 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 73994 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 73994 ']' 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.109 13:09:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:06.109 [2024-07-25 13:09:58.134065] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:06.109 [2024-07-25 13:09:58.134164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.109 [2024-07-25 13:09:58.274093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.367 [2024-07-25 13:09:58.357018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
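Before nvmf_tgt is launched, nvmftestinit/nvmf_veth_init (traced above) builds the isolated test network that the three pings then verify: a veth pair for the initiator side, veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge joining the host-side ends. Condensed from the commands in the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 is omitted here for brevity), the topology amounts to roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up     # bridge the host-side veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace, matching the 0.065 ms reply above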
00:14:06.367 [2024-07-25 13:09:58.357102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.367 [2024-07-25 13:09:58.357114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.367 [2024-07-25 13:09:58.357122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.367 [2024-07-25 13:09:58.357130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.367 [2024-07-25 13:09:58.357550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.367 [2024-07-25 13:09:58.357965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.367 [2024-07-25 13:09:58.358031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.367 [2024-07-25 13:09:58.358036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.367 [2024-07-25 13:09:58.414706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:06.933 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.933 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:14:06.933 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.933 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.933 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:07.190 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.190 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:07.190 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:07.448 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:07.448 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:07.706 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:07.706 13:09:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.963 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:07.963 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:07.963 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:07.963 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:07.963 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:08.220 [2024-07-25 13:10:00.230967] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.220 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:08.478 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:08.478 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.478 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:08.478 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:08.736 13:10:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.994 [2024-07-25 13:10:01.151998] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.994 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.252 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:09.252 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:09.252 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:09.252 13:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:10.629 Initializing NVMe Controllers 00:14:10.629 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:10.629 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:10.629 Initialization complete. Launching workers. 00:14:10.629 ======================================================== 00:14:10.629 Latency(us) 00:14:10.629 Device Information : IOPS MiB/s Average min max 00:14:10.629 PCIE (0000:00:10.0) NSID 1 from core 0: 22747.88 88.86 1407.00 395.05 8452.95 00:14:10.629 ======================================================== 00:14:10.629 Total : 22747.88 88.86 1407.00 395.05 8452.95 00:14:10.629 00:14:10.629 13:10:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:11.565 Initializing NVMe Controllers 00:14:11.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:11.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:11.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:11.565 Initialization complete. Launching workers. 
00:14:11.565 ======================================================== 00:14:11.565 Latency(us) 00:14:11.565 Device Information : IOPS MiB/s Average min max 00:14:11.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4177.34 16.32 239.10 90.68 6172.90 00:14:11.565 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.51 0.48 8160.52 6046.86 12044.35 00:14:11.565 ======================================================== 00:14:11.565 Total : 4300.85 16.80 466.58 90.68 12044.35 00:14:11.565 00:14:11.824 13:10:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:13.229 Initializing NVMe Controllers 00:14:13.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:13.229 Initialization complete. Launching workers. 00:14:13.229 ======================================================== 00:14:13.229 Latency(us) 00:14:13.229 Device Information : IOPS MiB/s Average min max 00:14:13.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9472.00 37.00 3379.35 457.66 8102.93 00:14:13.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.00 15.62 8045.99 6360.75 12597.03 00:14:13.229 ======================================================== 00:14:13.229 Total : 13472.00 52.62 4764.93 457.66 12597.03 00:14:13.229 00:14:13.229 13:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:13.229 13:10:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:15.763 Initializing NVMe Controllers 00:14:15.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.763 Controller IO queue size 128, less than required. 00:14:15.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:15.763 Controller IO queue size 128, less than required. 00:14:15.763 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:15.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:15.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:15.763 Initialization complete. Launching workers. 
00:14:15.763 ======================================================== 00:14:15.763 Latency(us) 00:14:15.763 Device Information : IOPS MiB/s Average min max 00:14:15.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1890.42 472.61 68828.94 39182.76 96490.23 00:14:15.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 708.47 177.12 188983.45 32221.36 322862.08 00:14:15.763 ======================================================== 00:14:15.763 Total : 2598.89 649.72 101583.65 32221.36 322862.08 00:14:15.763 00:14:15.763 13:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:16.022 Initializing NVMe Controllers 00:14:16.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:16.022 Controller IO queue size 128, less than required. 00:14:16.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.022 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:16.022 Controller IO queue size 128, less than required. 00:14:16.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.022 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:16.022 WARNING: Some requested NVMe devices were skipped 00:14:16.022 No valid NVMe controllers or AIO or URING devices found 00:14:16.022 13:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:18.555 Initializing NVMe Controllers 00:14:18.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:18.555 Controller IO queue size 128, less than required. 00:14:18.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.555 Controller IO queue size 128, less than required. 00:14:18.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:18.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:18.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:18.555 Initialization complete. Launching workers. 
00:14:18.555 00:14:18.555 ==================== 00:14:18.555 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:18.555 TCP transport: 00:14:18.555 polls: 11997 00:14:18.555 idle_polls: 8771 00:14:18.555 sock_completions: 3226 00:14:18.555 nvme_completions: 6143 00:14:18.555 submitted_requests: 9210 00:14:18.555 queued_requests: 1 00:14:18.555 00:14:18.555 ==================== 00:14:18.555 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:18.555 TCP transport: 00:14:18.555 polls: 12192 00:14:18.555 idle_polls: 7734 00:14:18.555 sock_completions: 4458 00:14:18.555 nvme_completions: 7085 00:14:18.555 submitted_requests: 10718 00:14:18.555 queued_requests: 1 00:14:18.555 ======================================================== 00:14:18.555 Latency(us) 00:14:18.555 Device Information : IOPS MiB/s Average min max 00:14:18.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1535.34 383.84 84706.99 39843.30 137823.40 00:14:18.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1770.82 442.70 72563.89 40895.10 105284.43 00:14:18.555 ======================================================== 00:14:18.555 Total : 3306.16 826.54 78203.00 39843.30 137823.40 00:14:18.555 00:14:18.555 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:18.555 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.814 rmmod nvme_tcp 00:14:18.814 rmmod nvme_fabrics 00:14:18.814 rmmod nvme_keyring 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 73994 ']' 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 73994 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 73994 ']' 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 73994 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73994 00:14:18.814 killing process with pid 73994 00:14:18.814 13:10:10 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73994' 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 73994 00:14:18.814 13:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 73994 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:19.383 00:14:19.383 real 0m13.806s 00:14:19.383 user 0m50.763s 00:14:19.383 sys 0m3.998s 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:19.383 ************************************ 00:14:19.383 END TEST nvmf_perf 00:14:19.383 ************************************ 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:19.383 ************************************ 00:14:19.383 START TEST nvmf_fio_host 00:14:19.383 ************************************ 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:19.383 * Looking for test storage... 
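The target-side bring-up that host/perf.sh drove through rpc.py above reduces to the following sequence (NQN, addresses and the q=1 perf invocation are copied from this log; this is a sketch, not the full script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc bdev_malloc_create 64 512                 # becomes Malloc0
  # Nvme0n1 was attached earlier via gen_nvme.sh | load_subsystem_config
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # one of the initiator-side runs from the log (QD 1, 4 KiB, 50/50 randrw, 1 s)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'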
00:14:19.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.383 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.383 13:10:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:19.643 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:19.644 Cannot find device "nvmf_tgt_br" 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.644 Cannot find device "nvmf_tgt_br2" 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:19.644 
Cannot find device "nvmf_tgt_br" 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:19.644 Cannot find device "nvmf_tgt_br2" 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:19.644 13:10:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.644 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.903 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.903 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.903 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.903 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:19.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:19.903 00:14:19.903 --- 10.0.0.2 ping statistics --- 00:14:19.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.904 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:19.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:19.904 00:14:19.904 --- 10.0.0.3 ping statistics --- 00:14:19.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.904 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:19.904 00:14:19.904 --- 10.0.0.1 ping statistics --- 00:14:19.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.904 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74399 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
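nvmfappstart, which runs next, amounts to launching nvmf_tgt inside the namespace and waiting for its RPC socket. A rough equivalent of that launch-and-wait pattern (the real waitforlisten helper in autotest_common.sh adds retries, timeouts and PID checks):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the target accepts commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done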
00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74399 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74399 ']' 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.904 13:10:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:19.904 [2024-07-25 13:10:11.955984] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:19.904 [2024-07-25 13:10:11.956081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.904 [2024-07-25 13:10:12.091502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.163 [2024-07-25 13:10:12.167062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.163 [2024-07-25 13:10:12.167120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.163 [2024-07-25 13:10:12.167146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.163 [2024-07-25 13:10:12.167153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.163 [2024-07-25 13:10:12.167159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
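As the startup notices above point out, the 0xFFFF tracepoint group mask means the trace buffer for this instance can be inspected while the target runs; for example (assuming the spdk_trace tool from this build is on PATH):

  spdk_trace -s nvmf -i 0          # live snapshot, as suggested by the notice
  cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw buffer for offline analysis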
00:14:20.163 [2024-07-25 13:10:12.167571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.163 [2024-07-25 13:10:12.167716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.163 [2024-07-25 13:10:12.167787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.163 [2024-07-25 13:10:12.167792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.163 [2024-07-25 13:10:12.219533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:20.731 13:10:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.731 13:10:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:14:20.731 13:10:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:20.990 [2024-07-25 13:10:13.063591] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.990 13:10:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:20.990 13:10:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:20.990 13:10:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:20.990 13:10:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:21.253 Malloc1 00:14:21.253 13:10:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:21.513 13:10:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:22.078 13:10:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.078 [2024-07-25 13:10:14.184632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.078 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:22.337 13:10:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:22.595 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:22.595 fio-3.35 00:14:22.595 Starting 1 thread 00:14:25.129 00:14:25.129 test: (groupid=0, jobs=1): err= 0: pid=74471: Thu Jul 25 13:10:16 2024 00:14:25.129 read: IOPS=9667, BW=37.8MiB/s (39.6MB/s)(75.8MiB/2006msec) 00:14:25.129 slat (nsec): min=1760, max=390143, avg=2214.32, stdev=3696.09 00:14:25.129 clat (usec): min=2261, max=12354, avg=6888.56, stdev=541.96 00:14:25.129 lat (usec): min=2309, max=12356, avg=6890.78, stdev=541.75 00:14:25.129 clat percentiles (usec): 00:14:25.129 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6456], 00:14:25.129 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:14:25.129 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7767], 00:14:25.129 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[11338], 99.95th=[11863], 00:14:25.129 | 99.99th=[12256] 00:14:25.129 bw ( KiB/s): min=37920, max=39320, per=99.97%, avg=38660.00, stdev=598.97, samples=4 00:14:25.129 iops : min= 9480, max= 9830, avg=9665.00, stdev=149.74, samples=4 00:14:25.129 write: IOPS=9678, BW=37.8MiB/s (39.6MB/s)(75.8MiB/2006msec); 0 zone resets 00:14:25.129 slat (nsec): min=1812, max=863638, avg=2316.23, stdev=6491.66 00:14:25.129 clat (usec): min=2147, max=12043, avg=6287.51, stdev=492.97 00:14:25.129 lat (usec): min=2159, max=12045, avg=6289.82, stdev=492.98 00:14:25.129 
clat percentiles (usec): 00:14:25.129 | 1.00th=[ 5342], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5932], 00:14:25.129 | 30.00th=[ 6063], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:14:25.129 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 00:14:25.129 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[10814], 99.95th=[11338], 00:14:25.129 | 99.99th=[11731] 00:14:25.129 bw ( KiB/s): min=38472, max=39168, per=99.97%, avg=38706.00, stdev=317.48, samples=4 00:14:25.129 iops : min= 9618, max= 9792, avg=9676.50, stdev=79.37, samples=4 00:14:25.129 lat (msec) : 4=0.12%, 10=99.73%, 20=0.15% 00:14:25.129 cpu : usr=66.33%, sys=25.44%, ctx=43, majf=0, minf=7 00:14:25.129 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:25.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:25.129 issued rwts: total=19394,19416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.129 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:25.129 00:14:25.129 Run status group 0 (all jobs): 00:14:25.129 READ: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.8MiB (79.4MB), run=2006-2006msec 00:14:25.129 WRITE: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=75.8MiB (79.5MB), run=2006-2006msec 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:25.129 13:10:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:25.129 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:25.129 fio-3.35 00:14:25.129 Starting 1 thread 00:14:27.702 00:14:27.702 test: (groupid=0, jobs=1): err= 0: pid=74520: Thu Jul 25 13:10:19 2024 00:14:27.702 read: IOPS=8924, BW=139MiB/s (146MB/s)(280MiB/2008msec) 00:14:27.702 slat (usec): min=2, max=135, avg= 3.67, stdev= 2.36 00:14:27.702 clat (usec): min=2225, max=16528, avg=7956.22, stdev=2610.50 00:14:27.702 lat (usec): min=2228, max=16531, avg=7959.89, stdev=2610.64 00:14:27.702 clat percentiles (usec): 00:14:27.702 | 1.00th=[ 3720], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5669], 00:14:27.702 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7504], 60.00th=[ 8225], 00:14:27.702 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[11731], 95.00th=[13042], 00:14:27.702 | 99.00th=[14877], 99.50th=[15270], 99.90th=[16450], 99.95th=[16450], 00:14:27.702 | 99.99th=[16581] 00:14:27.702 bw ( KiB/s): min=68192, max=72558, per=49.13%, avg=70155.50, stdev=1871.05, samples=4 00:14:27.702 iops : min= 4262, max= 4534, avg=4384.50, stdev=116.57, samples=4 00:14:27.702 write: IOPS=5091, BW=79.6MiB/s (83.4MB/s)(144MiB/1807msec); 0 zone resets 00:14:27.702 slat (usec): min=31, max=361, avg=36.96, stdev= 9.46 00:14:27.702 clat (usec): min=4370, max=19397, avg=11422.92, stdev=2154.10 00:14:27.702 lat (usec): min=4406, max=19430, avg=11459.88, stdev=2156.26 00:14:27.702 clat percentiles (usec): 00:14:27.702 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:14:27.702 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:14:27.702 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14484], 95.00th=[15401], 00:14:27.702 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19006], 99.95th=[19268], 00:14:27.702 | 99.99th=[19268] 00:14:27.702 bw ( KiB/s): min=69856, max=75560, per=89.71%, avg=73082.00, stdev=2428.17, samples=4 00:14:27.702 iops : min= 4366, max= 4722, avg=4567.50, stdev=151.59, samples=4 00:14:27.702 lat (msec) : 4=1.32%, 10=60.01%, 20=38.67% 00:14:27.702 cpu : usr=81.46%, sys=13.70%, ctx=19, majf=0, minf=6 00:14:27.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:27.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.702 issued rwts: total=17920,9200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.702 00:14:27.702 Run status group 0 (all jobs): 00:14:27.702 READ: 
bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=280MiB (294MB), run=2008-2008msec 00:14:27.702 WRITE: bw=79.6MiB/s (83.4MB/s), 79.6MiB/s-79.6MiB/s (83.4MB/s-83.4MB/s), io=144MiB (151MB), run=1807-1807msec 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.702 rmmod nvme_tcp 00:14:27.702 rmmod nvme_fabrics 00:14:27.702 rmmod nvme_keyring 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74399 ']' 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74399 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74399 ']' 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74399 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74399 00:14:27.702 killing process with pid 74399 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74399' 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74399 00:14:27.702 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74399 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
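Stripped of the libasan/libclang_rt probing that dominates the xtrace above, the fio runs in this test just preload the SPDK fio plugin and pass the fabrics connection string as the filename. A sketch with the same job file and target address as the log:

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
  # ioengine=spdk comes from the job file; the second run above used
  # mock_sgl_config.fio (16 KiB blocks) against the same subsystem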
00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:27.961 00:14:27.961 real 0m8.454s 00:14:27.961 user 0m34.872s 00:14:27.961 sys 0m2.394s 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:27.961 ************************************ 00:14:27.961 END TEST nvmf_fio_host 00:14:27.961 ************************************ 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:27.961 13:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:27.962 13:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.962 13:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:27.962 ************************************ 00:14:27.962 START TEST nvmf_failover 00:14:27.962 ************************************ 00:14:27.962 13:10:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:27.962 * Looking for test storage... 
00:14:27.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:27.962 13:10:20 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:27.962 Cannot find device "nvmf_tgt_br" 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.962 Cannot find device "nvmf_tgt_br2" 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link 
set nvmf_init_br down 00:14:27.962 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:28.221 Cannot find device "nvmf_tgt_br" 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:28.221 Cannot find device "nvmf_tgt_br2" 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:28.221 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:28.222 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:28.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:14:28.481 00:14:28.481 --- 10.0.0.2 ping statistics --- 00:14:28.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.481 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:28.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:28.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:28.481 00:14:28.481 --- 10.0.0.3 ping statistics --- 00:14:28.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.481 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:28.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:28.481 00:14:28.481 --- 10.0.0.1 ping statistics --- 00:14:28.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.481 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=74730 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 74730 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74730 ']' 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.481 13:10:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:28.481 [2024-07-25 13:10:20.526600] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:28.481 [2024-07-25 13:10:20.526703] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.481 [2024-07-25 13:10:20.666721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:28.740 [2024-07-25 13:10:20.759439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:28.740 [2024-07-25 13:10:20.759514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.740 [2024-07-25 13:10:20.759526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.740 [2024-07-25 13:10:20.759534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.740 [2024-07-25 13:10:20.759541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.740 [2024-07-25 13:10:20.759691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.740 [2024-07-25 13:10:20.760417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.740 [2024-07-25 13:10:20.760427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.740 [2024-07-25 13:10:20.815468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:29.307 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:29.307 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:29.307 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.307 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:29.307 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:29.307 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.307 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:29.566 [2024-07-25 13:10:21.649662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.566 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:29.825 Malloc0 00:14:29.825 13:10:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:30.084 13:10:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:30.343 13:10:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.602 [2024-07-25 13:10:22.680538] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.602 13:10:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:30.861 [2024-07-25 13:10:22.888680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:30.861 13:10:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:31.120 [2024-07-25 13:10:23.108952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4422 *** 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74792 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74792 /var/tmp/bdevperf.sock 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74792 ']' 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.120 13:10:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:32.058 13:10:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.058 13:10:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:32.058 13:10:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:32.317 NVMe0n1 00:14:32.317 13:10:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:32.576 00:14:32.576 13:10:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74811 00:14:32.576 13:10:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.576 13:10:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:33.512 13:10:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.771 13:10:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:37.058 13:10:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:37.316 00:14:37.316 13:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:37.575 [2024-07-25 13:10:29.545177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1b6e9d0 is same with the state(5) to be set 00:14:37.575 13:10:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:40.891 13:10:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.891 [2024-07-25 13:10:32.817809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.891 13:10:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:41.825 13:10:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:42.083 13:10:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74811 00:14:48.652 0 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74792 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74792 ']' 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74792 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74792 00:14:48.652 killing process with pid 74792 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74792' 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74792 00:14:48.652 13:10:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74792 00:14:48.652 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:48.652 [2024-07-25 13:10:23.185918] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:48.652 [2024-07-25 13:10:23.186021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74792 ] 00:14:48.653 [2024-07-25 13:10:23.332175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.653 [2024-07-25 13:10:23.437776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.653 [2024-07-25 13:10:23.496896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.653 Running I/O for 15 seconds... 
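With bdevperf holding the 15-second verify workload against both controller paths, failover.sh flips the subsystem's TCP listeners underneath it, so the ABORTED - SQ DELETION completions dumped below are consistent with the target deleting the submission queues of whichever path is being removed while I/O fails over to the surviving one. A condensed sketch of the listener sequence exercised in this run (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; addresses and NQN are the ones used above):

  NQN=nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420    # drop the primary path
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"                    # give bdevperf a third path
  rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420       # restore the original port
  sleep 1
  rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422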
00:14:48.653 [2024-07-25 13:10:25.937898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.937994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:38 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.653 [2024-07-25 13:10:25.938899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97528 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.938982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.938996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.939008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.939021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.939033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.939047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.653 [2024-07-25 13:10:25.939059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.653 [2024-07-25 13:10:25.939072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 
[2024-07-25 13:10:25.939314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.939756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.654 [2024-07-25 13:10:25.939983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.939996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.654 [2024-07-25 13:10:25.940276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.654 [2024-07-25 13:10:25.940296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 
[2024-07-25 13:10:25.940502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.940755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.940785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.940821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.940850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.940893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.940923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.940953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.940969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.940983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.655 [2024-07-25 13:10:25.941039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:67 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.655 [2024-07-25 13:10:25.941455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.655 [2024-07-25 13:10:25.941468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb41830 is same with 
the state(5) to be set 00:14:48.655 [2024-07-25 13:10:25.941490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.655 [2024-07-25 13:10:25.941500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.655 [2024-07-25 13:10:25.941510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97752 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 
13:10:25.941757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.941964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.941973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.941984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.941996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.942005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.942014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97816 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.942026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942037] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.942046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.942055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97824 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.942072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.942093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.942102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.942114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.942134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.942143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97840 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.942155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.942193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.942213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97848 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.942226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.656 [2024-07-25 13:10:25.942248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.656 [2024-07-25 13:10:25.942258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97856 len:8 PRP1 0x0 PRP2 0x0 00:14:48.656 [2024-07-25 13:10:25.942270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942342] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb41830 was disconnected and freed. reset controller. 
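Every completion printed above carries the status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion, i.e. the command was still queued when its submission queue was torn down for the controller reset. The stand-alone C sketch below decodes that pair the way the log renders it; the struct layout mirrors the NVMe completion-queue-entry status field, and the code is illustrative only, not SPDK's own print helper.

/*
 * Illustrative decode of the "(00/08)" status seen in the completions above.
 * SCT 0x0 (generic command status) with SC 0x08 is "Command Aborted due to
 * SQ Deletion" in the NVMe spec.  Stand-alone sketch, not SPDK code.
 */
#include <stdint.h>
#include <stdio.h>

struct cpl_status {          /* NVMe CQE status field (DW3 bits 17..31 + phase) */
    uint16_t p    : 1;       /* phase tag            */
    uint16_t sc   : 8;       /* status code          */
    uint16_t sct  : 3;       /* status code type     */
    uint16_t rsvd : 2;
    uint16_t m    : 1;       /* more                 */
    uint16_t dnr  : 1;       /* do not retry         */
};

static const char *generic_sc_str(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x07: return "ABORTED - BY REQUEST";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "OTHER";
    }
}

int main(void)
{
    /* The value the log prints as "(00/08)": SCT=0x0, SC=0x08. */
    struct cpl_status st = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };

    printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
           st.sct == 0 ? generic_sc_str((uint8_t)st.sc) : "NON-GENERIC",
           (unsigned)st.sct, (unsigned)st.sc,
           (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
    return 0;
}

Compiled and run, this prints "ABORTED - SQ DELETION (00/08) p:0 m:0 dnr:0", matching the fields repeated throughout the dump above.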
00:14:48.656 [2024-07-25 13:10:25.942378] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:48.656 [2024-07-25 13:10:25.942448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.656 [2024-07-25 13:10:25.942468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.656 [2024-07-25 13:10:25.942495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.656 [2024-07-25 13:10:25.942521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.656 [2024-07-25 13:10:25.942575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:25.942587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:48.656 [2024-07-25 13:10:25.942656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad2570 (9): Bad file descriptor 00:14:48.656 [2024-07-25 13:10:25.946712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:48.656 [2024-07-25 13:10:25.982773] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
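The notices above ("Start failover from 10.0.0.2:4420 to 10.0.0.2:4421", followed by "Resetting controller successful") show the bdev_nvme layer abandoning the failed TCP path and resetting the controller against the alternate transport ID. The C sketch below is a conceptual illustration of that path switch under hypothetical structure and function names; it is not SPDK's implementation.

/*
 * Conceptual sketch of the path switch logged above: when the active
 * transport ID fails, the next registered path is promoted and the
 * controller is reset against it.  Names here are hypothetical.
 */
#include <stdio.h>

struct path {
    const char *addr;
    const char *svcid;
    int         failed;
};

/* Mark the active path failed and return the next usable one, if any. */
static struct path *failover_next(struct path *paths, int n, int active)
{
    paths[active].failed = 1;
    for (int i = 1; i < n; i++) {
        struct path *p = &paths[(active + i) % n];
        if (!p->failed) {
            return p;
        }
    }
    return NULL;   /* no alternate path left: controller stays failed */
}

int main(void)
{
    struct path paths[] = {
        { "10.0.0.2", "4420", 0 },
        { "10.0.0.2", "4421", 0 },
    };
    int active = 0;

    struct path *next = failover_next(paths, 2, active);
    if (next) {
        printf("Start failover from %s:%s to %s:%s\n",
               paths[active].addr, paths[active].svcid,
               next->addr, next->svcid);
    }
    return 0;
}

The second burst of aborted I/O that follows (timestamps 13:10:29, tqpair 0xb41d50) is the same sequence repeating after the test takes down the next active path.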
00:14:48.656 [2024-07-25 13:10:29.545853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.656 [2024-07-25 13:10:29.545972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:29.546000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.656 [2024-07-25 13:10:29.546017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:29.546035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.656 [2024-07-25 13:10:29.546049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.656 [2024-07-25 13:10:29.546064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.656 [2024-07-25 13:10:29.546125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 
13:10:29.546332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.657 [2024-07-25 13:10:29.546802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.546977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.546992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.547005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.657 [2024-07-25 13:10:29.547019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.657 [2024-07-25 13:10:29.547032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.547565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126848 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 
[2024-07-25 13:10:29.547913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.547986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.547999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.548026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.548054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.548089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.548118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.548155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.658 [2024-07-25 13:10:29.548200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.548229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.658 [2024-07-25 13:10:29.548244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.658 [2024-07-25 13:10:29.548258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548573] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.548749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.548778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.548815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.548860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.548903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.548933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.548980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.548997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.549012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.549042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.549071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.549137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.549188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.659 [2024-07-25 13:10:29.549218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.549247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.549278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.549307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.549337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.549367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.549396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.659 [2024-07-25 13:10:29.549432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb41d50 is same with the state(5) to be set 00:14:48.659 [2024-07-25 13:10:29.549465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.659 [2024-07-25 13:10:29.549476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.659 [2024-07-25 13:10:29.549512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126648 len:8 PRP1 0x0 PRP2 0x0 00:14:48.659 [2024-07-25 13:10:29.549541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.659 [2024-07-25 13:10:29.549564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.659 [2024-07-25 13:10:29.549574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127104 len:8 PRP1 0x0 PRP2 0x0 00:14:48.659 [2024-07-25 13:10:29.549587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.659 [2024-07-25 13:10:29.549601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.659 [2024-07-25 13:10:29.549611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.659 [2024-07-25 13:10:29.549621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127112 len:8 PRP1 0x0 PRP2 0x0 00:14:48.659 [2024-07-25 13:10:29.549640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.549663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.549673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127120 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.549687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.549709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.549720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127128 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.549733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.549756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.549766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127136 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.549779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.549802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.549812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127144 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.549825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.549855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.549865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127152 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.549878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.549901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.549921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127160 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.549935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.549957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 
[2024-07-25 13:10:29.549967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127168 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.549981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.549994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127176 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127184 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127192 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127200 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127208 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127216 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127224 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127232 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127240 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127248 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127256 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:127264 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.660 [2024-07-25 13:10:29.550677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.660 [2024-07-25 13:10:29.550688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127272 len:8 PRP1 0x0 PRP2 0x0 00:14:48.660 [2024-07-25 13:10:29.550701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550767] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb41d50 was disconnected and freed. reset controller. 00:14:48.660 [2024-07-25 13:10:29.550785] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:14:48.660 [2024-07-25 13:10:29.550843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.660 [2024-07-25 13:10:29.550864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.660 [2024-07-25 13:10:29.550913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.660 [2024-07-25 13:10:29.550930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.661 [2024-07-25 13:10:29.550943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:29.550956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.661 [2024-07-25 13:10:29.550970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:29.550983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:48.661 [2024-07-25 13:10:29.551022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad2570 (9): Bad file descriptor 00:14:48.661 [2024-07-25 13:10:29.554993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:48.661 [2024-07-25 13:10:29.595364] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:48.661 [2024-07-25 13:10:34.063709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.064354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.064478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.064572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.064645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.064763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.064839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.064950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.065027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.065119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.065214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.065293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.065368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.065441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.065509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.065581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.065650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.065722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.065800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.065895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.065967] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.066042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.066109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.066182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.066249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.066327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.066400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.066473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.066541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.066614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.066681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.066758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.066827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.066926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.066996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.067083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.067153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.067226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.067294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.067366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.067434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.067511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.067578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.067651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.067718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.067791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.067878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.067960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.068028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.068106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.068173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.068248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.068315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.068388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.068454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.068529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.068596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.068696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.068774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.068868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.068945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.069047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.069135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.661 [2024-07-25 13:10:34.069208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.069295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.069370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.069430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.069500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.661 [2024-07-25 13:10:34.069567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.661 [2024-07-25 13:10:34.069639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.069705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.069778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.069857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.069937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.070011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.070083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.070154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.070221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.070288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.070362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.070430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:48.662 [2024-07-25 13:10:34.070501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.070570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.070645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.070705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.070778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.070867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.070948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.071009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.071084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.071157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.071231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.071298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.071373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.071440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.071514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.071581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.071654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.071722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.071794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.071881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 
13:10:34.071959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.072027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.072101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.072168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.072241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.072309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.072368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.072435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.072508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.072568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.662 [2024-07-25 13:10:34.072650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.072761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.072834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.072933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.662 [2024-07-25 13:10:34.073549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.662 [2024-07-25 13:10:34.073562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.073575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.073612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.073637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.073663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.073689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.073922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.073948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.073982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.073996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 
13:10:34.074229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.663 [2024-07-25 13:10:34.074460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.074486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.074512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.074538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.074565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.074592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.663 [2024-07-25 13:10:34.074618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.663 [2024-07-25 13:10:34.074638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:48.664 [2024-07-25 13:10:34.074651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.074665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb41a10 is same with the state(5) to be set 00:14:48.664 [2024-07-25 13:10:34.074683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.074694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.074704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98328 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.074717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.074731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.074740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.074750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98880 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.074762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.074774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.074783] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.074793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98888 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.074805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.074817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.074826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.074847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98896 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.074860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.074872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.074881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.074891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98904 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.074904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.074917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.074926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.074936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98912 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.074959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.074969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.074978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98920 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.074997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98928 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98936 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98944 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98952 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98960 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98968 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98336 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 
[2024-07-25 13:10:34.075347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98344 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98352 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98360 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98368 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98376 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98384 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:48.664 [2024-07-25 13:10:34.075598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:48.664 [2024-07-25 13:10:34.075607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98392 len:8 PRP1 0x0 PRP2 0x0 00:14:48.664 [2024-07-25 13:10:34.075620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.664 [2024-07-25 13:10:34.075689] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb41a10 was disconnected and freed. reset controller. 00:14:48.664 [2024-07-25 13:10:34.075708] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:14:48.664 [2024-07-25 13:10:34.075784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.665 [2024-07-25 13:10:34.075805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.665 [2024-07-25 13:10:34.075818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.665 [2024-07-25 13:10:34.075907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.665 [2024-07-25 13:10:34.075923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.665 [2024-07-25 13:10:34.075936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.665 [2024-07-25 13:10:34.075949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.665 [2024-07-25 13:10:34.075961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.665 [2024-07-25 13:10:34.075973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:48.665 [2024-07-25 13:10:34.076032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad2570 (9): Bad file descriptor 00:14:48.665 [2024-07-25 13:10:34.079421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:48.665 [2024-07-25 13:10:34.113281] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:48.665 00:14:48.665 Latency(us) 00:14:48.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.665 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:48.665 Verification LBA range: start 0x0 length 0x4000 00:14:48.665 NVMe0n1 : 15.01 10081.14 39.38 227.96 0.00 12385.80 629.29 17873.45 00:14:48.665 =================================================================================================================== 00:14:48.665 Total : 10081.14 39.38 227.96 0.00 12385.80 629.29 17873.45 00:14:48.665 Received shutdown signal, test time was about 15.000000 seconds 00:14:48.665 00:14:48.665 Latency(us) 00:14:48.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.665 =================================================================================================================== 00:14:48.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74985 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74985 /var/tmp/bdevperf.sock 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74985 ']' 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.665 13:10:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:49.232 13:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.232 13:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:14:49.232 13:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:49.232 [2024-07-25 13:10:41.343615] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:49.232 13:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:49.491 [2024-07-25 13:10:41.614437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:49.491 13:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:49.750 NVMe0n1 00:14:49.750 13:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:50.317 00:14:50.317 13:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:50.317 00:14:50.576 13:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:50.576 13:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:50.576 13:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:50.835 13:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:54.132 13:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:54.132 13:10:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:54.132 13:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:54.132 13:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75067 00:14:54.132 13:10:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75067 00:14:55.561 0 00:14:55.561 13:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:55.561 [2024-07-25 13:10:40.202642] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:55.561 [2024-07-25 13:10:40.202750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74985 ] 00:14:55.561 [2024-07-25 13:10:40.341260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.561 [2024-07-25 13:10:40.453584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.561 [2024-07-25 13:10:40.526052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:55.561 [2024-07-25 13:10:42.897577] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:14:55.561 [2024-07-25 13:10:42.897934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.561 [2024-07-25 13:10:42.898036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.561 [2024-07-25 13:10:42.898120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.561 [2024-07-25 13:10:42.898197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.561 [2024-07-25 13:10:42.898262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.561 [2024-07-25 13:10:42.898338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.561 [2024-07-25 13:10:42.898402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.561 [2024-07-25 13:10:42.898423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.561 [2024-07-25 13:10:42.898437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:55.561 [2024-07-25 13:10:42.898476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:55.561 [2024-07-25 13:10:42.898504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cad570 (9): Bad file descriptor 00:14:55.561 [2024-07-25 13:10:42.908575] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:55.561 Running I/O for 1 seconds... 
00:14:55.561 00:14:55.561 Latency(us) 00:14:55.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.561 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:55.561 Verification LBA range: start 0x0 length 0x4000 00:14:55.561 NVMe0n1 : 1.01 9933.06 38.80 0.00 0.00 12807.44 1377.75 16562.73 00:14:55.561 =================================================================================================================== 00:14:55.561 Total : 9933.06 38.80 0.00 0.00 12807.44 1377.75 16562.73 00:14:55.561 13:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:55.561 13:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:55.561 13:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:55.820 13:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:55.820 13:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:56.088 13:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:56.088 13:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74985 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74985 ']' 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74985 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74985 00:14:59.368 killing process with pid 74985 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.368 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74985' 00:14:59.369 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74985 00:14:59.369 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74985 00:14:59.626 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:14:59.626 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.884 13:10:51 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.884 13:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.884 rmmod nvme_tcp 00:14:59.884 rmmod nvme_fabrics 00:14:59.884 rmmod nvme_keyring 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 74730 ']' 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 74730 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74730 ']' 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74730 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.884 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74730 00:14:59.885 killing process with pid 74730 00:14:59.885 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:59.885 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:59.885 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74730' 00:14:59.885 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74730 00:14:59.885 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74730 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.142 
13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:00.142 00:15:00.142 real 0m32.320s 00:15:00.142 user 2m4.539s 00:15:00.142 sys 0m5.759s 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.142 13:10:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:00.142 ************************************ 00:15:00.142 END TEST nvmf_failover 00:15:00.142 ************************************ 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:00.401 ************************************ 00:15:00.401 START TEST nvmf_host_discovery 00:15:00.401 ************************************ 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:00.401 * Looking for test storage... 00:15:00.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.401 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:00.402 Cannot find device "nvmf_tgt_br" 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.402 Cannot find device "nvmf_tgt_br2" 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:00.402 Cannot find device "nvmf_tgt_br" 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:00.402 Cannot find device "nvmf_tgt_br2" 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.402 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:00.660 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:00.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:00.661 00:15:00.661 --- 10.0.0.2 ping statistics --- 00:15:00.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.661 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:00.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:00.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:00.661 00:15:00.661 --- 10.0.0.3 ping statistics --- 00:15:00.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.661 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:00.661 00:15:00.661 --- 10.0.0.1 ping statistics --- 00:15:00.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.661 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75329 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75329 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75329 ']' 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.661 13:10:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.919 [2024-07-25 13:10:52.859255] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:00.919 [2024-07-25 13:10:52.859356] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.919 [2024-07-25 13:10:52.996400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.919 [2024-07-25 13:10:53.082282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.919 [2024-07-25 13:10:53.082329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.919 [2024-07-25 13:10:53.082355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.919 [2024-07-25 13:10:53.082362] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.919 [2024-07-25 13:10:53.082369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.919 [2024-07-25 13:10:53.082394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.177 [2024-07-25 13:10:53.132955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 [2024-07-25 13:10:53.794273] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 [2024-07-25 13:10:53.802387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 null0 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 null1 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75361 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75361 /tmp/host.sock 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75361 ']' 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:01.743 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.744 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:01.744 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:01.744 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.744 13:10:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.744 [2024-07-25 13:10:53.878132] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:01.744 [2024-07-25 13:10:53.878210] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75361 ] 00:15:02.002 [2024-07-25 13:10:54.009152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.002 [2024-07-25 13:10:54.096060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.002 [2024-07-25 13:10:54.150705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:02.938 13:10:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:02.938 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.938 13:10:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:03.195 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.196 [2024-07-25 13:10:55.246683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.196 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:15:03.454 13:10:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:15:03.713 [2024-07-25 13:10:55.887103] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:03.713 [2024-07-25 13:10:55.887130] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:03.713 
[2024-07-25 13:10:55.887163] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:03.713 [2024-07-25 13:10:55.893144] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:03.971 [2024-07-25 13:10:55.950011] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:03.971 [2024-07-25 13:10:55.950038] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.538 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.539 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.797 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.798 [2024-07-25 13:10:56.836602] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:04.798 [2024-07-25 13:10:56.836946] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:04.798 [2024-07-25 13:10:56.836973] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:04.798 [2024-07-25 13:10:56.842952] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:04.798 
13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:04.798 [2024-07-25 13:10:56.901229] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:04.798 [2024-07-25 13:10:56.901255] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:04.798 [2024-07-25 13:10:56.901262] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:04.798 13:10:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:04.798 13:10:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.057 [2024-07-25 13:10:57.073730] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:05.057 [2024-07-25 13:10:57.073775] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:05.057 [2024-07-25 13:10:57.077495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.057 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.057 [2024-07-25 13:10:57.077548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.057 [2024-07-25 13:10:57.077577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.057 [2024-07-25 13:10:57.077586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.057 [2024-07-25 13:10:57.077596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.057 [2024-07-25 13:10:57.077604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.057 [2024-07-25 13:10:57.077614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.057 [2024-07-25 13:10:57.077622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.058 [2024-07-25 13:10:57.077633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x729620 is same with the state(5) to be set 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:05.058 [2024-07-25 13:10:57.079748] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:05.058 [2024-07-25 13:10:57.079791] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:05.058 [2024-07-25 13:10:57.079888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x729620 (9): Bad file descriptor 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.058 13:10:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:05.058 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.317 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:05.318 13:10:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.318 13:10:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.707 [2024-07-25 13:10:58.507736] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:06.707 [2024-07-25 13:10:58.507761] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:06.708 [2024-07-25 13:10:58.507794] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:06.708 [2024-07-25 13:10:58.513765] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:06.708 [2024-07-25 13:10:58.574046] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:06.708 [2024-07-25 13:10:58.574099] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.708 request: 00:15:06.708 { 00:15:06.708 "name": "nvme", 00:15:06.708 "trtype": "tcp", 00:15:06.708 "traddr": "10.0.0.2", 00:15:06.708 "adrfam": "ipv4", 00:15:06.708 "trsvcid": "8009", 00:15:06.708 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:06.708 "wait_for_attach": true, 00:15:06.708 "method": "bdev_nvme_start_discovery", 00:15:06.708 "req_id": 1 00:15:06.708 } 00:15:06.708 Got JSON-RPC error response 00:15:06.708 response: 00:15:06.708 { 00:15:06.708 "code": -17, 00:15:06.708 "message": "File exists" 00:15:06.708 } 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.708 request: 00:15:06.708 { 00:15:06.708 "name": "nvme_second", 00:15:06.708 "trtype": "tcp", 00:15:06.708 "traddr": "10.0.0.2", 00:15:06.708 "adrfam": "ipv4", 00:15:06.708 "trsvcid": "8009", 00:15:06.708 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:06.708 "wait_for_attach": true, 00:15:06.708 "method": "bdev_nvme_start_discovery", 00:15:06.708 "req_id": 1 00:15:06.708 } 00:15:06.708 Got JSON-RPC error response 00:15:06.708 response: 00:15:06.708 { 00:15:06.708 "code": -17, 00:15:06.708 "message": "File exists" 00:15:06.708 } 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
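A minimal sketch of the discovery RPC sequence being exercised at this point, assembled strictly from the rpc_cmd invocations recorded in the trace above (rpc_cmd is the test suite's JSON-RPC wrapper and /tmp/host.sock is the host application's RPC socket, exactly as used throughout this log; nothing beyond those recorded calls is implied):

# Start a discovery service against the target's discovery endpoint and wait (-w) for the initial attach.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# A second bdev_nvme_start_discovery aimed at the same 10.0.0.2:8009 endpoint is rejected,
# whether it reuses the existing name (-b nvme) or supplies a new one (-b nvme_second):
# the RPC returns JSON-RPC error -17 "File exists", as shown in the request/response pairs above.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Confirm the original discovery context is still present and that the bdevs it attached
# (nvme0n1 nvme0n2 in this run) are unchanged.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
rpc_cmd -s /tmp/host.sock bdev_get_bdevs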
00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.708 13:10:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.083 [2024-07-25 13:10:59.851080] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:08.083 [2024-07-25 13:10:59.851147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x765c30 with addr=10.0.0.2, port=8010 00:15:08.083 [2024-07-25 13:10:59.851173] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:08.083 [2024-07-25 13:10:59.851184] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:08.083 [2024-07-25 13:10:59.851209] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:09.019 [2024-07-25 13:11:00.851043] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:09.019 [2024-07-25 13:11:00.851116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x765c30 with addr=10.0.0.2, port=8010 00:15:09.019 [2024-07-25 13:11:00.851134] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:09.019 [2024-07-25 13:11:00.851142] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:09.019 [2024-07-25 13:11:00.851150] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:09.954 [2024-07-25 13:11:01.850962] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:09.954 request: 00:15:09.954 { 00:15:09.954 "name": "nvme_second", 00:15:09.954 "trtype": "tcp", 00:15:09.954 "traddr": "10.0.0.2", 00:15:09.954 "adrfam": "ipv4", 00:15:09.954 "trsvcid": "8010", 00:15:09.954 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:09.954 "wait_for_attach": false, 00:15:09.954 "attach_timeout_ms": 3000, 00:15:09.954 "method": "bdev_nvme_start_discovery", 00:15:09.954 "req_id": 1 00:15:09.954 } 00:15:09.954 Got JSON-RPC error response 00:15:09.954 response: 00:15:09.954 { 00:15:09.954 "code": -110, 00:15:09.954 "message": "Connection timed out" 00:15:09.954 } 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75361 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.954 13:11:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:15:09.954 rmmod nvme_tcp 00:15:09.954 rmmod nvme_fabrics 00:15:09.955 rmmod nvme_keyring 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75329 ']' 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75329 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75329 ']' 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75329 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75329 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:09.955 killing process with pid 75329 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75329' 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75329 00:15:09.955 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75329 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:10.213 00:15:10.213 real 0m9.930s 00:15:10.213 user 0m19.324s 00:15:10.213 sys 0m1.907s 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.213 ************************************ 00:15:10.213 END TEST nvmf_host_discovery 00:15:10.213 ************************************ 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
00:15:10.213 13:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:10.213 ************************************ 00:15:10.213 START TEST nvmf_host_multipath_status 00:15:10.213 ************************************ 00:15:10.213 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:10.473 * Looking for test storage... 00:15:10.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.473 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 
-- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:10.474 Cannot find device "nvmf_tgt_br" 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.474 Cannot find device "nvmf_tgt_br2" 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:10.474 Cannot find device "nvmf_tgt_br" 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:10.474 Cannot find device "nvmf_tgt_br2" 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:10.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:10.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:10.474 
13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:10.474 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:10.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:10.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:15:10.733 00:15:10.733 --- 10.0.0.2 ping statistics --- 00:15:10.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.733 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:10.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:10.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:10.733 00:15:10.733 --- 10.0.0.3 ping statistics --- 00:15:10.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.733 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:10.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:10.733 00:15:10.733 --- 10.0.0.1 ping statistics --- 00:15:10.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.733 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=75814 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 75814 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75814 ']' 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.733 
13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.733 13:11:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:10.733 [2024-07-25 13:11:02.883407] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:10.734 [2024-07-25 13:11:02.883493] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.992 [2024-07-25 13:11:03.021965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:10.992 [2024-07-25 13:11:03.100379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.992 [2024-07-25 13:11:03.100429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.992 [2024-07-25 13:11:03.100454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.992 [2024-07-25 13:11:03.100462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.992 [2024-07-25 13:11:03.100468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.992 [2024-07-25 13:11:03.100615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.992 [2024-07-25 13:11:03.100624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.992 [2024-07-25 13:11:03.152544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75814 00:15:11.928 13:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:11.928 [2024-07-25 13:11:04.063666] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.928 13:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:12.186 Malloc0 00:15:12.444 13:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:12.701 13:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:12.701 13:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.958 [2024-07-25 13:11:05.019107] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.958 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:13.216 [2024-07-25 13:11:05.219198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75864 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75864 /var/tmp/bdevperf.sock 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75864 ']' 00:15:13.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.216 13:11:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:14.148 13:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.148 13:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:14.148 13:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:14.406 13:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:14.664 Nvme0n1 00:15:14.664 13:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:14.923 Nvme0n1 00:15:14.923 13:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:14.923 13:11:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:16.822 13:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:16.822 13:11:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:17.079 13:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:17.336 13:11:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:18.281 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:18.281 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:18.539 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.539 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:18.798 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:18.798 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:18.798 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:18.798 13:11:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:18.798 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:18.798 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:19.057 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.057 13:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:19.057 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.057 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:19.057 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:19.057 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.315 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.315 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:19.315 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.315 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:19.573 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.573 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:19.573 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:19.573 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:19.831 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:19.831 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:19.832 13:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:20.090 13:11:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:20.348 13:11:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:21.296 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:21.297 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:21.297 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.297 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:21.565 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:21.565 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:21.565 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:21.565 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.823 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.823 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:21.823 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.823 13:11:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:22.081 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.081 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:22.081 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.081 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:22.339 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.339 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:22.339 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.339 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:22.597 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.597 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:22.597 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.597 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:22.597 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:22.597 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:22.597 13:11:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:22.855 13:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:23.112 13:11:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:24.047 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:24.047 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:24.305 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.306 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:24.306 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.306 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:24.306 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:24.306 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.563 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:24.563 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:24.563 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.563 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:24.821 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.821 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:24.821 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:24.822 13:11:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:25.080 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.080 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:25.080 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:25.080 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.339 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.339 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:25.339 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.339 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:25.596 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.596 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:25.596 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:25.854 13:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:26.112 13:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:27.047 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:27.047 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:27.047 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.047 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:27.306 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:27.306 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:27.306 13:11:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.306 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:27.564 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:27.564 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:27.564 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.564 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:27.822 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:27.822 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:27.822 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:27.822 13:11:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:28.080 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.080 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:28.080 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.080 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:28.338 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.338 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:28.338 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:28.338 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.597 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:28.597 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:28.597 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:28.855 13:11:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:29.113 13:11:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:30.051 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:30.051 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:30.051 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:30.051 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:30.310 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:30.310 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:30.310 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:30.310 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:30.568 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:30.568 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:30.568 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:30.568 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:30.826 13:11:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:15:31.085 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.085 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:31.085 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.085 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:31.344 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.344 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:31.344 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:31.603 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:31.862 13:11:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:32.799 13:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:32.799 13:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:32.799 13:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.799 13:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:33.058 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:33.058 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:33.058 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.058 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:33.317 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:33.317 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:33.317 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:33.317 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:15:33.574 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:33.574 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:33.574 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:33.574 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.832 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:33.832 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:33.832 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:33.832 13:11:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.089 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:34.089 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:34.089 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.089 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:34.347 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:34.347 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:34.613 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:34.613 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:34.887 13:11:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:34.887 13:11:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
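Each round of checks is driven by flipping the ANA state of the two listeners and, in the @116 step just above, by switching bdevperf's multipath policy to active_active. A small sketch of that driver side, using the RPC calls exactly as they appear in the trace; the set_ANA_state name mirrors the traced helper, and the surrounding variables are assumptions:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Let bdevperf spread I/O across every optimized path, then mark both
  # listeners optimized and give the host a second to re-read the ANA log page.
  "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  set_ANA_state optimized optimized
  sleep 1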
00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.264 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:36.523 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.523 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:36.523 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.523 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:36.781 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.781 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:36.781 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.781 13:11:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:37.039 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.039 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:37.039 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.039 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:37.296 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.297 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:37.297 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.297 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:37.554 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.554 
13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:37.555 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:37.813 13:11:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:38.072 13:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:39.008 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:39.008 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:39.008 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.008 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:39.267 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:39.267 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:39.267 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.267 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:39.525 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.526 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:39.526 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:39.526 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.784 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.784 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:39.784 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:39.784 13:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.042 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:40.301 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.301 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:40.301 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:40.560 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:40.819 13:11:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:41.762 13:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:41.762 13:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:41.762 13:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.762 13:11:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:42.021 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.021 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:42.021 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.021 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:42.279 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.280 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:15:42.280 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:42.280 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.537 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.537 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:42.537 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.537 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.796 13:11:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:43.055 13:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:43.055 13:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:43.055 13:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:43.313 13:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:15:43.573 13:11:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:44.509 13:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:44.509 13:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:44.509 13:11:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.509 13:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:44.767 13:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.767 13:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:44.767 13:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.767 13:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:45.040 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:45.040 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:45.040 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.040 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:45.329 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.329 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:45.329 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.329 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:45.587 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.587 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:45.587 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.587 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:45.845 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.845 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:45.845 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.845 13:11:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75864 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75864 ']' 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75864 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75864 00:15:46.104 killing process with pid 75864 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75864' 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75864 00:15:46.104 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75864 00:15:46.104 Connection closed with partial response: 00:15:46.104 00:15:46.104 00:15:46.366 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75864 00:15:46.366 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.366 [2024-07-25 13:11:05.279958] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:46.366 [2024-07-25 13:11:05.280118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75864 ] 00:15:46.366 [2024-07-25 13:11:05.413666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.366 [2024-07-25 13:11:05.499469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.366 [2024-07-25 13:11:05.551607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:46.366 Running I/O for 90 seconds... 
00:15:46.366 [2024-07-25 13:11:20.867192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.867980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.867993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.868011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.868024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.868043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.868065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.868085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.366 [2024-07-25 13:11:20.868099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.868117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.366 [2024-07-25 13:11:20.868131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.868150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.366 [2024-07-25 13:11:20.868163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.868181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.366 [2024-07-25 13:11:20.868194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.366 [2024-07-25 13:11:20.868213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.868243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.868276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.868308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.868355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.868387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.868963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.868991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.367 [2024-07-25 13:11:20.869006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.367 [2024-07-25 13:11:20.869603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.367 [2024-07-25 13:11:20.869622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.869635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.869666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.869697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.869729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.869760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.869792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.869829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.869910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.869962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.869981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.869995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:59 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.368 [2024-07-25 13:11:20.870653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 
13:11:20.870802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:46.368 [2024-07-25 13:11:20.870974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.368 [2024-07-25 13:11:20.870986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 
cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.871254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.871285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.871315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.871346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.871377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.871408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.871439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.871458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.871470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:20.872067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.872128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.872168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.872215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.872259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.872299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.872338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:20.872377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:20.872416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:46.369 [2024-07-25 13:11:20.872433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.623972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:35.624031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:35.624231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:35.624381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:35.624411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.369 [2024-07-25 13:11:35.624441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.369 [2024-07-25 13:11:35.624501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:46.369 [2024-07-25 13:11:35.624518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.624663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624706] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.624722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.624755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.624787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.624819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.624862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.624980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.624993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.625061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.370 [2024-07-25 13:11:35.625093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.625123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.625154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.625183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.625213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.625245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.625276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.370 [2024-07-25 13:11:35.625306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:46.370 [2024-07-25 13:11:35.625324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.389 [2024-07-25 13:11:35.625336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:46.389 [2024-07-25 13:11:35.625354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.389 [2024-07-25 13:11:35.625366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:46.389 [2024-07-25 13:11:35.625383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.389 [2024-07-25 13:11:35.625396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:46.389 [2024-07-25 13:11:35.625414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.625426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.625450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.625463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.625481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.625493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.625511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.625524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.625542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.625554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.625572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.625584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.625602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.625615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 
13:11:35.627308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107552 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.390 [2024-07-25 13:11:35.627711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:46.390 [2024-07-25 13:11:35.627788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.390 [2024-07-25 13:11:35.627801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:46.390 Received shutdown signal, test time was about 31.102041 seconds 00:15:46.390 00:15:46.390 Latency(us) 00:15:46.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.390 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:46.390 Verification LBA range: start 0x0 length 0x4000 00:15:46.390 Nvme0n1 : 31.10 9730.59 38.01 0.00 0.00 13127.54 733.56 4026531.84 00:15:46.390 =================================================================================================================== 00:15:46.390 Total : 9730.59 38.01 0.00 0.00 13127.54 733.56 4026531.84 00:15:46.390 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.648 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:15:46.648 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.649 13:11:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.649 rmmod nvme_tcp 00:15:46.649 rmmod nvme_fabrics 00:15:46.649 rmmod nvme_keyring 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 75814 ']' 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 75814 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75814 ']' 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75814 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75814 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:46.649 killing process with pid 75814 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75814' 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75814 00:15:46.649 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75814 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:46.907 00:15:46.907 real 0m36.610s 00:15:46.907 user 1m57.615s 00:15:46.907 sys 0m10.601s 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:46.907 ************************************ 00:15:46.907 END TEST nvmf_host_multipath_status 00:15:46.907 ************************************ 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.907 13:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.907 ************************************ 00:15:46.907 START TEST nvmf_discovery_remove_ifc 00:15:46.907 ************************************ 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:46.907 * Looking for test storage... 00:15:46.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.907 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:47.166 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.167 13:11:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:47.167 Cannot find device "nvmf_tgt_br" 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.167 Cannot find device "nvmf_tgt_br2" 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:47.167 Cannot find device "nvmf_tgt_br" 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:47.167 Cannot find device "nvmf_tgt_br2" 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:15:47.167 13:11:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:47.167 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
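
For readers skimming the trace, the nvmf_veth_init sequence above and just below boils down to the following topology setup. The commands are condensed verbatim from the trace (the various "ip link set ... up" steps are omitted for brevity); running it yourself assumes root and iproute2, and the 10.0.0.0/24 addresses are the suite's fixed test values:

    # initiator stays in the root namespace; both target ports move into nvmf_tgt_ns_spdk
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # the *_br veth peers are tied together with a bridge so the two sides can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic in and across the bridge (rules as traced just below)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow in the trace simply confirm this topology is routable in both directions before the target is started.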
00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:47.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:15:47.426 00:15:47.426 --- 10.0.0.2 ping statistics --- 00:15:47.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.426 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:47.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:47.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:15:47.426 00:15:47.426 --- 10.0.0.3 ping statistics --- 00:15:47.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.426 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:47.426 00:15:47.426 --- 10.0.0.1 ping statistics --- 00:15:47.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.426 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- 
# nvmfpid=76627 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76627 00:15:47.426 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76627 ']' 00:15:47.427 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.427 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.427 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.427 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.427 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.427 13:11:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:47.427 [2024-07-25 13:11:39.519229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:47.427 [2024-07-25 13:11:39.519329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.685 [2024-07-25 13:11:39.655982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.685 [2024-07-25 13:11:39.730027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.685 [2024-07-25 13:11:39.730078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.685 [2024-07-25 13:11:39.730104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.685 [2024-07-25 13:11:39.730112] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.685 [2024-07-25 13:11:39.730118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
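
The target application itself is then launched inside that namespace. A minimal sketch of the nvmfappstart step traced above, with the binary path and flags copied from the trace; the readiness loop here is only a crude stand-in for the suite's waitforlisten helper, which polls the RPC server rather than the socket file:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # simplified readiness check: wait for the RPC UNIX socket to appear
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done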
00:15:47.685 [2024-07-25 13:11:39.730143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.685 [2024-07-25 13:11:39.780067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:48.253 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:48.253 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:15:48.253 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.511 [2024-07-25 13:11:40.493391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.511 [2024-07-25 13:11:40.501480] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:48.511 null0 00:15:48.511 [2024-07-25 13:11:40.533422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76659 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76659 /tmp/host.sock 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76659 ']' 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.511 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.511 13:11:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.511 [2024-07-25 13:11:40.598512] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
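
On the initiator side the test then starts a second SPDK app as the host, pointed at /tmp/host.sock, and kicks off discovery against 10.0.0.2:8009. The commands below are copied from the trace above and just below; the rpc_cmd definition is an assumption that approximates the suite's helper by calling scripts/rpc.py directly:

    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # stand-in for the suite wrapper

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    until [ -S /tmp/host.sock ]; do sleep 0.1; done   # simplified waitforlisten

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

Once discovery attaches, the subsystem's namespace shows up on the host as bdev nvme0n1, which is what the repeated get_bdev_list checks below are waiting for.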
00:15:48.511 [2024-07-25 13:11:40.598604] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76659 ] 00:15:48.770 [2024-07-25 13:11:40.730887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.770 [2024-07-25 13:11:40.827564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.705 [2024-07-25 13:11:41.650915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.705 13:11:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.641 [2024-07-25 13:11:42.696304] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:50.641 [2024-07-25 13:11:42.696329] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:50.641 [2024-07-25 13:11:42.696362] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:50.641 [2024-07-25 13:11:42.702348] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:50.641 [2024-07-25 13:11:42.759147] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:50.641 [2024-07-25 13:11:42.759220] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:50.641 [2024-07-25 13:11:42.759248] 
bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:50.641 [2024-07-25 13:11:42.759263] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:50.641 [2024-07-25 13:11:42.759283] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.641 [2024-07-25 13:11:42.765190] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1854ef0 was disconnected and freed. delete nvme_qpair. 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:50.641 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:50.913 13:11:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:51.907 13:11:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.842 13:11:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.842 13:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:52.842 13:11:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:15:54.214 13:11:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:55.148 13:11:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:56.082 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:56.082 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.083 [2024-07-25 13:11:48.187144] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:56.083 [2024-07-25 13:11:48.187231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.083 [2024-07-25 13:11:48.187246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.083 [2024-07-25 13:11:48.187258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.083 [2024-07-25 13:11:48.187266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.083 [2024-07-25 13:11:48.187275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.083 [2024-07-25 13:11:48.187283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.083 [2024-07-25 
13:11:48.187292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.083 [2024-07-25 13:11:48.187300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.083 [2024-07-25 13:11:48.187309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.083 [2024-07-25 13:11:48.187318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.083 [2024-07-25 13:11:48.187326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17baac0 is same with the state(5) to be set 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:56.083 13:11:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:56.083 [2024-07-25 13:11:48.197146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17baac0 (9): Bad file descriptor 00:15:56.083 [2024-07-25 13:11:48.207164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:57.041 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:57.041 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.041 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:57.041 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:57.041 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.041 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:57.041 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:57.041 [2024-07-25 13:11:49.220912] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:57.041 [2024-07-25 13:11:49.220997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17baac0 with addr=10.0.0.2, port=4420 00:15:57.041 [2024-07-25 13:11:49.221027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17baac0 is same with the state(5) to be set 00:15:57.041 [2024-07-25 13:11:49.221081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17baac0 (9): Bad file descriptor 00:15:57.041 [2024-07-25 13:11:49.221848] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:57.041 [2024-07-25 13:11:49.221963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:57.041 [2024-07-25 13:11:49.221984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:57.041 [2024-07-25 13:11:49.222003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:15:57.041 [2024-07-25 13:11:49.222048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:57.041 [2024-07-25 13:11:49.222069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:57.300 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.300 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:57.300 13:11:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:58.235 [2024-07-25 13:11:50.222130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:58.235 [2024-07-25 13:11:50.222175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:58.235 [2024-07-25 13:11:50.222202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:58.235 [2024-07-25 13:11:50.222212] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:58.235 [2024-07-25 13:11:50.222234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:58.235 [2024-07-25 13:11:50.222279] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:58.235 [2024-07-25 13:11:50.222322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.235 [2024-07-25 13:11:50.222338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.235 [2024-07-25 13:11:50.222351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.235 [2024-07-25 13:11:50.222359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.235 [2024-07-25 13:11:50.222368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.235 [2024-07-25 13:11:50.222384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.235 [2024-07-25 13:11:50.222393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.235 [2024-07-25 13:11:50.222401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.235 [2024-07-25 13:11:50.222410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.235 [2024-07-25 13:11:50.222417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.235 [2024-07-25 13:11:50.222425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
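
The failure-injection phase traced above, and confirmed just below, amounts to: delete the target's address, take nvmf_tgt_if down, and poll until the discovered namespace disappears from the host's bdev list. A condensed sketch follows; the ip commands and the jq pipeline are copied from the trace, while the unbounded polling loop and the rpc_cmd stand-in are simplifications of the suite's wait_for_bdev and rpc_cmd helpers, whose full definitions are not shown in this excerpt:

    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # stand-in for the suite wrapper

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    get_bdev_list() {   # every bdev name the host app currently knows, sorted, on one line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # poll once per second until nvme0n1 has been torn down
    while [[ "$(get_bdev_list)" != '' ]]; do sleep 1; done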
00:15:58.235 [2024-07-25 13:11:50.222441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17be860 (9): Bad file descriptor 00:15:58.235 [2024-07-25 13:11:50.223249] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:58.235 [2024-07-25 13:11:50.223265] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.235 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:58.236 13:11:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.612 13:11:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:59.612 13:11:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:00.180 [2024-07-25 13:11:52.230266] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:00.180 [2024-07-25 13:11:52.230287] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:00.180 [2024-07-25 13:11:52.230303] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:00.180 [2024-07-25 13:11:52.236312] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:00.180 [2024-07-25 13:11:52.292327] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:00.180 [2024-07-25 13:11:52.292528] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:00.180 [2024-07-25 13:11:52.292591] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:00.180 [2024-07-25 13:11:52.292722] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:00.180 [2024-07-25 13:11:52.292927] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:00.180 [2024-07-25 13:11:52.299099] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1832460 was disconnected and freed. delete nvme_qpair. 
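
Recovery is the mirror image: restore 10.0.0.2 on nvmf_tgt_if, bring the link back up, and poll until discovery re-attaches the subsystem, which then surfaces as controller nvme1 with namespace nvme1n1, as the trace just below confirms. A sketch under the same assumptions as the previous one:

    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # stand-in for the suite wrapper
    get_bdev_list() { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # wait for discovery to reconnect and expose the namespace again
    while [[ "$(get_bdev_list)" != 'nvme1n1' ]]; do sleep 1; done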
00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76659 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76659 ']' 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76659 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:00.439 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.440 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76659 00:16:00.440 killing process with pid 76659 00:16:00.440 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.440 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.440 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76659' 00:16:00.440 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76659 00:16:00.440 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76659 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:00.699 rmmod nvme_tcp 00:16:00.699 rmmod nvme_fabrics 00:16:00.699 rmmod nvme_keyring 00:16:00.699 13:11:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76627 ']' 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76627 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76627 ']' 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76627 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76627 00:16:00.699 killing process with pid 76627 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76627' 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76627 00:16:00.699 13:11:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76627 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:00.958 00:16:00.958 real 0m14.094s 00:16:00.958 user 0m24.590s 00:16:00.958 sys 0m2.341s 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:00.958 ************************************ 00:16:00.958 END TEST nvmf_discovery_remove_ifc 00:16:00.958 ************************************ 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.958 13:11:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.218 ************************************ 00:16:01.218 START TEST nvmf_identify_kernel_target 00:16:01.218 ************************************ 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:01.218 * Looking for test storage... 00:16:01.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.218 
13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.218 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:01.219 Cannot find device "nvmf_tgt_br" 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.219 Cannot find device "nvmf_tgt_br2" 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:01.219 Cannot find device "nvmf_tgt_br" 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:01.219 Cannot find device "nvmf_tgt_br2" 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.219 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:01.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:01.478 00:16:01.478 --- 10.0.0.2 ping statistics --- 00:16:01.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.478 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:01.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:01.478 00:16:01.478 --- 10.0.0.3 ping statistics --- 00:16:01.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.478 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:01.478 00:16:01.478 --- 10.0.0.1 ping statistics --- 00:16:01.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.478 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:01.478 13:11:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:02.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:02.045 Waiting for block devices as requested 00:16:02.045 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.045 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:02.045 No valid GPT data, bailing 00:16:02.045 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:02.304 13:11:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:02.304 No valid GPT data, bailing 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:02.304 No valid GPT data, bailing 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:02.304 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:02.305 No valid GPT data, bailing 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:02.305 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid=0dc0e430-4c84-41eb-b0bf-db443f090fb1 -a 10.0.0.1 -t tcp -s 4420 00:16:02.564 00:16:02.564 Discovery Log Number of Records 2, Generation counter 2 00:16:02.564 =====Discovery Log Entry 0====== 00:16:02.564 trtype: tcp 00:16:02.564 adrfam: ipv4 00:16:02.564 subtype: current discovery subsystem 00:16:02.564 treq: not specified, sq flow control disable supported 00:16:02.564 portid: 1 00:16:02.564 trsvcid: 4420 00:16:02.564 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:02.564 traddr: 10.0.0.1 00:16:02.564 eflags: none 00:16:02.564 sectype: none 00:16:02.564 =====Discovery Log Entry 1====== 00:16:02.564 trtype: tcp 00:16:02.564 adrfam: ipv4 00:16:02.564 subtype: nvme subsystem 00:16:02.564 treq: not 
specified, sq flow control disable supported 00:16:02.564 portid: 1 00:16:02.564 trsvcid: 4420 00:16:02.564 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:02.564 traddr: 10.0.0.1 00:16:02.564 eflags: none 00:16:02.564 sectype: none 00:16:02.564 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:02.564 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:02.564 ===================================================== 00:16:02.564 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:02.564 ===================================================== 00:16:02.564 Controller Capabilities/Features 00:16:02.564 ================================ 00:16:02.564 Vendor ID: 0000 00:16:02.564 Subsystem Vendor ID: 0000 00:16:02.564 Serial Number: 1496c944197496ccd37c 00:16:02.564 Model Number: Linux 00:16:02.564 Firmware Version: 6.7.0-68 00:16:02.564 Recommended Arb Burst: 0 00:16:02.564 IEEE OUI Identifier: 00 00 00 00:16:02.564 Multi-path I/O 00:16:02.564 May have multiple subsystem ports: No 00:16:02.564 May have multiple controllers: No 00:16:02.564 Associated with SR-IOV VF: No 00:16:02.564 Max Data Transfer Size: Unlimited 00:16:02.564 Max Number of Namespaces: 0 00:16:02.564 Max Number of I/O Queues: 1024 00:16:02.564 NVMe Specification Version (VS): 1.3 00:16:02.564 NVMe Specification Version (Identify): 1.3 00:16:02.564 Maximum Queue Entries: 1024 00:16:02.564 Contiguous Queues Required: No 00:16:02.564 Arbitration Mechanisms Supported 00:16:02.564 Weighted Round Robin: Not Supported 00:16:02.564 Vendor Specific: Not Supported 00:16:02.564 Reset Timeout: 7500 ms 00:16:02.564 Doorbell Stride: 4 bytes 00:16:02.564 NVM Subsystem Reset: Not Supported 00:16:02.564 Command Sets Supported 00:16:02.564 NVM Command Set: Supported 00:16:02.564 Boot Partition: Not Supported 00:16:02.564 Memory Page Size Minimum: 4096 bytes 00:16:02.564 Memory Page Size Maximum: 4096 bytes 00:16:02.564 Persistent Memory Region: Not Supported 00:16:02.564 Optional Asynchronous Events Supported 00:16:02.564 Namespace Attribute Notices: Not Supported 00:16:02.564 Firmware Activation Notices: Not Supported 00:16:02.564 ANA Change Notices: Not Supported 00:16:02.564 PLE Aggregate Log Change Notices: Not Supported 00:16:02.564 LBA Status Info Alert Notices: Not Supported 00:16:02.564 EGE Aggregate Log Change Notices: Not Supported 00:16:02.564 Normal NVM Subsystem Shutdown event: Not Supported 00:16:02.564 Zone Descriptor Change Notices: Not Supported 00:16:02.564 Discovery Log Change Notices: Supported 00:16:02.564 Controller Attributes 00:16:02.564 128-bit Host Identifier: Not Supported 00:16:02.564 Non-Operational Permissive Mode: Not Supported 00:16:02.564 NVM Sets: Not Supported 00:16:02.564 Read Recovery Levels: Not Supported 00:16:02.564 Endurance Groups: Not Supported 00:16:02.564 Predictable Latency Mode: Not Supported 00:16:02.564 Traffic Based Keep ALive: Not Supported 00:16:02.564 Namespace Granularity: Not Supported 00:16:02.564 SQ Associations: Not Supported 00:16:02.564 UUID List: Not Supported 00:16:02.564 Multi-Domain Subsystem: Not Supported 00:16:02.564 Fixed Capacity Management: Not Supported 00:16:02.564 Variable Capacity Management: Not Supported 00:16:02.564 Delete Endurance Group: Not Supported 00:16:02.564 Delete NVM Set: Not Supported 00:16:02.564 Extended LBA Formats Supported: Not Supported 00:16:02.564 Flexible Data 
Placement Supported: Not Supported 00:16:02.564 00:16:02.564 Controller Memory Buffer Support 00:16:02.564 ================================ 00:16:02.564 Supported: No 00:16:02.564 00:16:02.564 Persistent Memory Region Support 00:16:02.564 ================================ 00:16:02.564 Supported: No 00:16:02.564 00:16:02.564 Admin Command Set Attributes 00:16:02.564 ============================ 00:16:02.564 Security Send/Receive: Not Supported 00:16:02.564 Format NVM: Not Supported 00:16:02.564 Firmware Activate/Download: Not Supported 00:16:02.564 Namespace Management: Not Supported 00:16:02.564 Device Self-Test: Not Supported 00:16:02.564 Directives: Not Supported 00:16:02.564 NVMe-MI: Not Supported 00:16:02.564 Virtualization Management: Not Supported 00:16:02.564 Doorbell Buffer Config: Not Supported 00:16:02.564 Get LBA Status Capability: Not Supported 00:16:02.564 Command & Feature Lockdown Capability: Not Supported 00:16:02.564 Abort Command Limit: 1 00:16:02.564 Async Event Request Limit: 1 00:16:02.564 Number of Firmware Slots: N/A 00:16:02.564 Firmware Slot 1 Read-Only: N/A 00:16:02.564 Firmware Activation Without Reset: N/A 00:16:02.564 Multiple Update Detection Support: N/A 00:16:02.564 Firmware Update Granularity: No Information Provided 00:16:02.564 Per-Namespace SMART Log: No 00:16:02.564 Asymmetric Namespace Access Log Page: Not Supported 00:16:02.564 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:02.564 Command Effects Log Page: Not Supported 00:16:02.564 Get Log Page Extended Data: Supported 00:16:02.564 Telemetry Log Pages: Not Supported 00:16:02.564 Persistent Event Log Pages: Not Supported 00:16:02.564 Supported Log Pages Log Page: May Support 00:16:02.564 Commands Supported & Effects Log Page: Not Supported 00:16:02.564 Feature Identifiers & Effects Log Page:May Support 00:16:02.564 NVMe-MI Commands & Effects Log Page: May Support 00:16:02.564 Data Area 4 for Telemetry Log: Not Supported 00:16:02.564 Error Log Page Entries Supported: 1 00:16:02.564 Keep Alive: Not Supported 00:16:02.564 00:16:02.564 NVM Command Set Attributes 00:16:02.564 ========================== 00:16:02.564 Submission Queue Entry Size 00:16:02.564 Max: 1 00:16:02.564 Min: 1 00:16:02.564 Completion Queue Entry Size 00:16:02.564 Max: 1 00:16:02.564 Min: 1 00:16:02.564 Number of Namespaces: 0 00:16:02.564 Compare Command: Not Supported 00:16:02.564 Write Uncorrectable Command: Not Supported 00:16:02.564 Dataset Management Command: Not Supported 00:16:02.564 Write Zeroes Command: Not Supported 00:16:02.564 Set Features Save Field: Not Supported 00:16:02.564 Reservations: Not Supported 00:16:02.564 Timestamp: Not Supported 00:16:02.564 Copy: Not Supported 00:16:02.564 Volatile Write Cache: Not Present 00:16:02.564 Atomic Write Unit (Normal): 1 00:16:02.564 Atomic Write Unit (PFail): 1 00:16:02.564 Atomic Compare & Write Unit: 1 00:16:02.565 Fused Compare & Write: Not Supported 00:16:02.565 Scatter-Gather List 00:16:02.565 SGL Command Set: Supported 00:16:02.565 SGL Keyed: Not Supported 00:16:02.565 SGL Bit Bucket Descriptor: Not Supported 00:16:02.565 SGL Metadata Pointer: Not Supported 00:16:02.565 Oversized SGL: Not Supported 00:16:02.565 SGL Metadata Address: Not Supported 00:16:02.565 SGL Offset: Supported 00:16:02.565 Transport SGL Data Block: Not Supported 00:16:02.565 Replay Protected Memory Block: Not Supported 00:16:02.565 00:16:02.565 Firmware Slot Information 00:16:02.565 ========================= 00:16:02.565 Active slot: 0 00:16:02.565 00:16:02.565 00:16:02.565 Error Log 
00:16:02.565 ========= 00:16:02.565 00:16:02.565 Active Namespaces 00:16:02.565 ================= 00:16:02.565 Discovery Log Page 00:16:02.565 ================== 00:16:02.565 Generation Counter: 2 00:16:02.565 Number of Records: 2 00:16:02.565 Record Format: 0 00:16:02.565 00:16:02.565 Discovery Log Entry 0 00:16:02.565 ---------------------- 00:16:02.565 Transport Type: 3 (TCP) 00:16:02.565 Address Family: 1 (IPv4) 00:16:02.565 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:02.565 Entry Flags: 00:16:02.565 Duplicate Returned Information: 0 00:16:02.565 Explicit Persistent Connection Support for Discovery: 0 00:16:02.565 Transport Requirements: 00:16:02.565 Secure Channel: Not Specified 00:16:02.565 Port ID: 1 (0x0001) 00:16:02.565 Controller ID: 65535 (0xffff) 00:16:02.565 Admin Max SQ Size: 32 00:16:02.565 Transport Service Identifier: 4420 00:16:02.565 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:02.565 Transport Address: 10.0.0.1 00:16:02.565 Discovery Log Entry 1 00:16:02.565 ---------------------- 00:16:02.565 Transport Type: 3 (TCP) 00:16:02.565 Address Family: 1 (IPv4) 00:16:02.565 Subsystem Type: 2 (NVM Subsystem) 00:16:02.565 Entry Flags: 00:16:02.565 Duplicate Returned Information: 0 00:16:02.565 Explicit Persistent Connection Support for Discovery: 0 00:16:02.565 Transport Requirements: 00:16:02.565 Secure Channel: Not Specified 00:16:02.565 Port ID: 1 (0x0001) 00:16:02.565 Controller ID: 65535 (0xffff) 00:16:02.565 Admin Max SQ Size: 32 00:16:02.565 Transport Service Identifier: 4420 00:16:02.565 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:02.565 Transport Address: 10.0.0.1 00:16:02.565 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:02.824 get_feature(0x01) failed 00:16:02.824 get_feature(0x02) failed 00:16:02.824 get_feature(0x04) failed 00:16:02.824 ===================================================== 00:16:02.824 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:02.824 ===================================================== 00:16:02.824 Controller Capabilities/Features 00:16:02.824 ================================ 00:16:02.824 Vendor ID: 0000 00:16:02.824 Subsystem Vendor ID: 0000 00:16:02.824 Serial Number: 87220b166e04081022d4 00:16:02.824 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:02.824 Firmware Version: 6.7.0-68 00:16:02.824 Recommended Arb Burst: 6 00:16:02.825 IEEE OUI Identifier: 00 00 00 00:16:02.825 Multi-path I/O 00:16:02.825 May have multiple subsystem ports: Yes 00:16:02.825 May have multiple controllers: Yes 00:16:02.825 Associated with SR-IOV VF: No 00:16:02.825 Max Data Transfer Size: Unlimited 00:16:02.825 Max Number of Namespaces: 1024 00:16:02.825 Max Number of I/O Queues: 128 00:16:02.825 NVMe Specification Version (VS): 1.3 00:16:02.825 NVMe Specification Version (Identify): 1.3 00:16:02.825 Maximum Queue Entries: 1024 00:16:02.825 Contiguous Queues Required: No 00:16:02.825 Arbitration Mechanisms Supported 00:16:02.825 Weighted Round Robin: Not Supported 00:16:02.825 Vendor Specific: Not Supported 00:16:02.825 Reset Timeout: 7500 ms 00:16:02.825 Doorbell Stride: 4 bytes 00:16:02.825 NVM Subsystem Reset: Not Supported 00:16:02.825 Command Sets Supported 00:16:02.825 NVM Command Set: Supported 00:16:02.825 Boot Partition: Not Supported 00:16:02.825 Memory 
Page Size Minimum: 4096 bytes 00:16:02.825 Memory Page Size Maximum: 4096 bytes 00:16:02.825 Persistent Memory Region: Not Supported 00:16:02.825 Optional Asynchronous Events Supported 00:16:02.825 Namespace Attribute Notices: Supported 00:16:02.825 Firmware Activation Notices: Not Supported 00:16:02.825 ANA Change Notices: Supported 00:16:02.825 PLE Aggregate Log Change Notices: Not Supported 00:16:02.825 LBA Status Info Alert Notices: Not Supported 00:16:02.825 EGE Aggregate Log Change Notices: Not Supported 00:16:02.825 Normal NVM Subsystem Shutdown event: Not Supported 00:16:02.825 Zone Descriptor Change Notices: Not Supported 00:16:02.825 Discovery Log Change Notices: Not Supported 00:16:02.825 Controller Attributes 00:16:02.825 128-bit Host Identifier: Supported 00:16:02.825 Non-Operational Permissive Mode: Not Supported 00:16:02.825 NVM Sets: Not Supported 00:16:02.825 Read Recovery Levels: Not Supported 00:16:02.825 Endurance Groups: Not Supported 00:16:02.825 Predictable Latency Mode: Not Supported 00:16:02.825 Traffic Based Keep ALive: Supported 00:16:02.825 Namespace Granularity: Not Supported 00:16:02.825 SQ Associations: Not Supported 00:16:02.825 UUID List: Not Supported 00:16:02.825 Multi-Domain Subsystem: Not Supported 00:16:02.825 Fixed Capacity Management: Not Supported 00:16:02.825 Variable Capacity Management: Not Supported 00:16:02.825 Delete Endurance Group: Not Supported 00:16:02.825 Delete NVM Set: Not Supported 00:16:02.825 Extended LBA Formats Supported: Not Supported 00:16:02.825 Flexible Data Placement Supported: Not Supported 00:16:02.825 00:16:02.825 Controller Memory Buffer Support 00:16:02.825 ================================ 00:16:02.825 Supported: No 00:16:02.825 00:16:02.825 Persistent Memory Region Support 00:16:02.825 ================================ 00:16:02.825 Supported: No 00:16:02.825 00:16:02.825 Admin Command Set Attributes 00:16:02.825 ============================ 00:16:02.825 Security Send/Receive: Not Supported 00:16:02.825 Format NVM: Not Supported 00:16:02.825 Firmware Activate/Download: Not Supported 00:16:02.825 Namespace Management: Not Supported 00:16:02.825 Device Self-Test: Not Supported 00:16:02.825 Directives: Not Supported 00:16:02.825 NVMe-MI: Not Supported 00:16:02.825 Virtualization Management: Not Supported 00:16:02.825 Doorbell Buffer Config: Not Supported 00:16:02.825 Get LBA Status Capability: Not Supported 00:16:02.825 Command & Feature Lockdown Capability: Not Supported 00:16:02.825 Abort Command Limit: 4 00:16:02.825 Async Event Request Limit: 4 00:16:02.825 Number of Firmware Slots: N/A 00:16:02.825 Firmware Slot 1 Read-Only: N/A 00:16:02.825 Firmware Activation Without Reset: N/A 00:16:02.825 Multiple Update Detection Support: N/A 00:16:02.825 Firmware Update Granularity: No Information Provided 00:16:02.825 Per-Namespace SMART Log: Yes 00:16:02.825 Asymmetric Namespace Access Log Page: Supported 00:16:02.825 ANA Transition Time : 10 sec 00:16:02.825 00:16:02.825 Asymmetric Namespace Access Capabilities 00:16:02.825 ANA Optimized State : Supported 00:16:02.825 ANA Non-Optimized State : Supported 00:16:02.825 ANA Inaccessible State : Supported 00:16:02.825 ANA Persistent Loss State : Supported 00:16:02.825 ANA Change State : Supported 00:16:02.825 ANAGRPID is not changed : No 00:16:02.825 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:02.825 00:16:02.825 ANA Group Identifier Maximum : 128 00:16:02.825 Number of ANA Group Identifiers : 128 00:16:02.825 Max Number of Allowed Namespaces : 1024 00:16:02.825 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:02.825 Command Effects Log Page: Supported 00:16:02.825 Get Log Page Extended Data: Supported 00:16:02.825 Telemetry Log Pages: Not Supported 00:16:02.825 Persistent Event Log Pages: Not Supported 00:16:02.825 Supported Log Pages Log Page: May Support 00:16:02.825 Commands Supported & Effects Log Page: Not Supported 00:16:02.825 Feature Identifiers & Effects Log Page:May Support 00:16:02.825 NVMe-MI Commands & Effects Log Page: May Support 00:16:02.825 Data Area 4 for Telemetry Log: Not Supported 00:16:02.825 Error Log Page Entries Supported: 128 00:16:02.825 Keep Alive: Supported 00:16:02.825 Keep Alive Granularity: 1000 ms 00:16:02.825 00:16:02.825 NVM Command Set Attributes 00:16:02.825 ========================== 00:16:02.825 Submission Queue Entry Size 00:16:02.825 Max: 64 00:16:02.825 Min: 64 00:16:02.825 Completion Queue Entry Size 00:16:02.825 Max: 16 00:16:02.825 Min: 16 00:16:02.825 Number of Namespaces: 1024 00:16:02.825 Compare Command: Not Supported 00:16:02.825 Write Uncorrectable Command: Not Supported 00:16:02.825 Dataset Management Command: Supported 00:16:02.825 Write Zeroes Command: Supported 00:16:02.825 Set Features Save Field: Not Supported 00:16:02.825 Reservations: Not Supported 00:16:02.825 Timestamp: Not Supported 00:16:02.825 Copy: Not Supported 00:16:02.825 Volatile Write Cache: Present 00:16:02.825 Atomic Write Unit (Normal): 1 00:16:02.825 Atomic Write Unit (PFail): 1 00:16:02.825 Atomic Compare & Write Unit: 1 00:16:02.825 Fused Compare & Write: Not Supported 00:16:02.825 Scatter-Gather List 00:16:02.825 SGL Command Set: Supported 00:16:02.825 SGL Keyed: Not Supported 00:16:02.825 SGL Bit Bucket Descriptor: Not Supported 00:16:02.825 SGL Metadata Pointer: Not Supported 00:16:02.825 Oversized SGL: Not Supported 00:16:02.825 SGL Metadata Address: Not Supported 00:16:02.825 SGL Offset: Supported 00:16:02.825 Transport SGL Data Block: Not Supported 00:16:02.825 Replay Protected Memory Block: Not Supported 00:16:02.825 00:16:02.825 Firmware Slot Information 00:16:02.825 ========================= 00:16:02.825 Active slot: 0 00:16:02.825 00:16:02.825 Asymmetric Namespace Access 00:16:02.825 =========================== 00:16:02.825 Change Count : 0 00:16:02.825 Number of ANA Group Descriptors : 1 00:16:02.825 ANA Group Descriptor : 0 00:16:02.825 ANA Group ID : 1 00:16:02.825 Number of NSID Values : 1 00:16:02.825 Change Count : 0 00:16:02.825 ANA State : 1 00:16:02.825 Namespace Identifier : 1 00:16:02.825 00:16:02.825 Commands Supported and Effects 00:16:02.825 ============================== 00:16:02.825 Admin Commands 00:16:02.825 -------------- 00:16:02.825 Get Log Page (02h): Supported 00:16:02.825 Identify (06h): Supported 00:16:02.825 Abort (08h): Supported 00:16:02.825 Set Features (09h): Supported 00:16:02.825 Get Features (0Ah): Supported 00:16:02.825 Asynchronous Event Request (0Ch): Supported 00:16:02.825 Keep Alive (18h): Supported 00:16:02.825 I/O Commands 00:16:02.825 ------------ 00:16:02.825 Flush (00h): Supported 00:16:02.825 Write (01h): Supported LBA-Change 00:16:02.825 Read (02h): Supported 00:16:02.825 Write Zeroes (08h): Supported LBA-Change 00:16:02.825 Dataset Management (09h): Supported 00:16:02.825 00:16:02.825 Error Log 00:16:02.825 ========= 00:16:02.825 Entry: 0 00:16:02.826 Error Count: 0x3 00:16:02.826 Submission Queue Id: 0x0 00:16:02.826 Command Id: 0x5 00:16:02.826 Phase Bit: 0 00:16:02.826 Status Code: 0x2 00:16:02.826 Status Code Type: 0x0 00:16:02.826 Do Not Retry: 1 00:16:02.826 Error 
Location: 0x28 00:16:02.826 LBA: 0x0 00:16:02.826 Namespace: 0x0 00:16:02.826 Vendor Log Page: 0x0 00:16:02.826 ----------- 00:16:02.826 Entry: 1 00:16:02.826 Error Count: 0x2 00:16:02.826 Submission Queue Id: 0x0 00:16:02.826 Command Id: 0x5 00:16:02.826 Phase Bit: 0 00:16:02.826 Status Code: 0x2 00:16:02.826 Status Code Type: 0x0 00:16:02.826 Do Not Retry: 1 00:16:02.826 Error Location: 0x28 00:16:02.826 LBA: 0x0 00:16:02.826 Namespace: 0x0 00:16:02.826 Vendor Log Page: 0x0 00:16:02.826 ----------- 00:16:02.826 Entry: 2 00:16:02.826 Error Count: 0x1 00:16:02.826 Submission Queue Id: 0x0 00:16:02.826 Command Id: 0x4 00:16:02.826 Phase Bit: 0 00:16:02.826 Status Code: 0x2 00:16:02.826 Status Code Type: 0x0 00:16:02.826 Do Not Retry: 1 00:16:02.826 Error Location: 0x28 00:16:02.826 LBA: 0x0 00:16:02.826 Namespace: 0x0 00:16:02.826 Vendor Log Page: 0x0 00:16:02.826 00:16:02.826 Number of Queues 00:16:02.826 ================ 00:16:02.826 Number of I/O Submission Queues: 128 00:16:02.826 Number of I/O Completion Queues: 128 00:16:02.826 00:16:02.826 ZNS Specific Controller Data 00:16:02.826 ============================ 00:16:02.826 Zone Append Size Limit: 0 00:16:02.826 00:16:02.826 00:16:02.826 Active Namespaces 00:16:02.826 ================= 00:16:02.826 get_feature(0x05) failed 00:16:02.826 Namespace ID:1 00:16:02.826 Command Set Identifier: NVM (00h) 00:16:02.826 Deallocate: Supported 00:16:02.826 Deallocated/Unwritten Error: Not Supported 00:16:02.826 Deallocated Read Value: Unknown 00:16:02.826 Deallocate in Write Zeroes: Not Supported 00:16:02.826 Deallocated Guard Field: 0xFFFF 00:16:02.826 Flush: Supported 00:16:02.826 Reservation: Not Supported 00:16:02.826 Namespace Sharing Capabilities: Multiple Controllers 00:16:02.826 Size (in LBAs): 1310720 (5GiB) 00:16:02.826 Capacity (in LBAs): 1310720 (5GiB) 00:16:02.826 Utilization (in LBAs): 1310720 (5GiB) 00:16:02.826 UUID: 3594d637-5aa4-4e91-9922-eaf288b57e18 00:16:02.826 Thin Provisioning: Not Supported 00:16:02.826 Per-NS Atomic Units: Yes 00:16:02.826 Atomic Boundary Size (Normal): 0 00:16:02.826 Atomic Boundary Size (PFail): 0 00:16:02.826 Atomic Boundary Offset: 0 00:16:02.826 NGUID/EUI64 Never Reused: No 00:16:02.826 ANA group ID: 1 00:16:02.826 Namespace Write Protected: No 00:16:02.826 Number of LBA Formats: 1 00:16:02.826 Current LBA Format: LBA Format #00 00:16:02.826 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:02.826 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.826 rmmod nvme_tcp 00:16:02.826 rmmod nvme_fabrics 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:02.826 13:11:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:02.826 13:11:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:02.826 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:02.826 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:02.826 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:03.085 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:03.085 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:03.085 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:03.085 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:03.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:03.652 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.652 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:03.911 00:16:03.911 real 0m2.728s 00:16:03.911 user 0m0.932s 00:16:03.911 sys 0m1.297s 00:16:03.911 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.911 13:11:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.911 ************************************ 00:16:03.911 END TEST nvmf_identify_kernel_target 00:16:03.911 ************************************ 00:16:03.911 13:11:55 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:03.911 13:11:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:03.911 13:11:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.911 13:11:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:03.911 ************************************ 00:16:03.911 START TEST nvmf_auth_host 00:16:03.911 ************************************ 00:16:03.911 13:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:03.911 * Looking for test storage... 00:16:03.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:03.912 13:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.912 13:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.912 13:11:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:03.912 Cannot find device "nvmf_tgt_br" 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.912 Cannot find device "nvmf_tgt_br2" 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:03.912 Cannot find device "nvmf_tgt_br" 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:03.912 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:04.172 Cannot find device "nvmf_tgt_br2" 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.172 13:11:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:04.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:04.172 00:16:04.172 --- 10.0.0.2 ping statistics --- 00:16:04.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.172 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:04.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:04.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:04.172 00:16:04.172 --- 10.0.0.3 ping statistics --- 00:16:04.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.172 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:04.172 00:16:04.172 --- 10.0.0.1 ping statistics --- 00:16:04.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.172 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.172 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77535 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77535 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77535 ']' 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
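[editor's note] For reference, the nvmf_veth_init sequence traced above reduces to the following standalone commands (interface names, addresses and firewall rules are taken directly from the log; the teardown and error handling of the real helper are omitted, so this is a sketch, not the full function):

    # Target side lives in its own netns; both sides are joined through the nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic reach the initiator interface and cross the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks mirrored from the log: both directions must be reachable.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1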
00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.431 13:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e975f9ac76d399154c5032eaf0745cb3 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fjj 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e975f9ac76d399154c5032eaf0745cb3 0 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e975f9ac76d399154c5032eaf0745cb3 0 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e975f9ac76d399154c5032eaf0745cb3 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fjj 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fjj 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fjj 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:05.367 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.368 13:11:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e4daf7ee5ecf1dbc4087a207583b9c063cbe834b1d66471a781cd27e24d1e170 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dMN 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e4daf7ee5ecf1dbc4087a207583b9c063cbe834b1d66471a781cd27e24d1e170 3 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e4daf7ee5ecf1dbc4087a207583b9c063cbe834b1d66471a781cd27e24d1e170 3 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e4daf7ee5ecf1dbc4087a207583b9c063cbe834b1d66471a781cd27e24d1e170 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:05.368 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dMN 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dMN 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dMN 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ccd161f6a13f0614d007431c6f167f40469afd056527e623 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.HTY 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ccd161f6a13f0614d007431c6f167f40469afd056527e623 0 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ccd161f6a13f0614d007431c6f167f40469afd056527e623 0 
00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ccd161f6a13f0614d007431c6f167f40469afd056527e623 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.HTY 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.HTY 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.HTY 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:05.627 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7259ed133b4aeb51e51b4684ce32fad7d3001aee3f2c36b2 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XQj 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7259ed133b4aeb51e51b4684ce32fad7d3001aee3f2c36b2 2 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7259ed133b4aeb51e51b4684ce32fad7d3001aee3f2c36b2 2 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7259ed133b4aeb51e51b4684ce32fad7d3001aee3f2c36b2 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XQj 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XQj 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XQj 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.628 13:11:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af8eefe71c957664d7c0befa5d18b22d 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oS3 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af8eefe71c957664d7c0befa5d18b22d 1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af8eefe71c957664d7c0befa5d18b22d 1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af8eefe71c957664d7c0befa5d18b22d 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oS3 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oS3 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.oS3 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c01044664bb0839b495be6b7545940a 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.znt 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c01044664bb0839b495be6b7545940a 1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c01044664bb0839b495be6b7545940a 1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=6c01044664bb0839b495be6b7545940a 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:05.628 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.887 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.znt 00:16:05.887 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.znt 00:16:05.887 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.znt 00:16:05.887 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:05.887 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.887 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=01103faa79a5d54c27efbe886ffdbd92dad9fab15ed39503 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9B7 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 01103faa79a5d54c27efbe886ffdbd92dad9fab15ed39503 2 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 01103faa79a5d54c27efbe886ffdbd92dad9fab15ed39503 2 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=01103faa79a5d54c27efbe886ffdbd92dad9fab15ed39503 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9B7 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9B7 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.9B7 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:05.888 13:11:57 
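[editor's note] Every gen_dhchap_key cycle in this stretch of the trace follows the same pattern: draw random bytes with xxd, wrap them in a DHHC-1 secret string, and store the result in a 0600 temp file whose path is recorded in keys[] or ckeys[]. The sketch below reconstructs that pattern from the commands shown; the body of the traced "python -" step is not visible in the xtrace, so the encoding used here (base64 of the ASCII hex key with a 4-byte CRC32 appended, which matches the DHHC-1:<digest>:<base64>: strings that appear later in the log) is an assumption about what it does:

    digest=sha384 len=48                                   # example arguments, as in "gen_dhchap_key sha384 48"
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)         # $len hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")

    python3 - "$key" "${digests[$digest]}" > "$file" <<'PYEOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended (assumed DHHC-1 layout)
    print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
    PYEOF

    chmod 0600 "$file"
    echo "$file"                                 # this path ends up in keys[] / ckeys[]

The digest-to-number mapping (null=0, sha256=1, sha384=2, sha512=3) is taken from the digests array visible in the trace.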
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=878dad05c8403916eac9c6728ba991e2 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Yuu 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 878dad05c8403916eac9c6728ba991e2 0 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 878dad05c8403916eac9c6728ba991e2 0 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=878dad05c8403916eac9c6728ba991e2 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Yuu 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Yuu 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Yuu 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f27eadea59c9948ba45950d31aaece40a7fdfb27da97d60e9965cf62867ae1c 00:16:05.888 13:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KsF 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f27eadea59c9948ba45950d31aaece40a7fdfb27da97d60e9965cf62867ae1c 3 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f27eadea59c9948ba45950d31aaece40a7fdfb27da97d60e9965cf62867ae1c 3 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f27eadea59c9948ba45950d31aaece40a7fdfb27da97d60e9965cf62867ae1c 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KsF 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KsF 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KsF 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:05.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77535 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77535 ']' 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.888 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fjj 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dMN ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dMN 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HTY 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XQj ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.XQj 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.oS3 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.znt ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.znt 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.9B7 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Yuu ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Yuu 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KsF 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:06.148 13:11:58 
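[editor's note] Once the target application is listening, each generated file is registered with the SPDK keyring over the RPC socket, secrets as keyN and controller secrets as ckeyN. The loop traced above (host/auth.sh@80-82) is equivalent to this sketch (rpc_cmd is the test suite's wrapper around scripts/rpc.py, pointed at the target's RPC socket):

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        # Controller ("bidirectional") keys are optional; ckeys[4] is empty in this run.
        [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done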
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:06.148 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:06.407 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:06.407 13:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:06.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:06.665 Waiting for block devices as requested 00:16:06.665 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:06.924 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:07.496 No valid GPT data, bailing 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:07.496 No valid GPT data, bailing 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:07.496 No valid GPT data, bailing 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:07.496 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:07.497 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:07.497 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:07.497 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:07.497 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:07.497 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:07.497 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:07.755 No valid GPT data, bailing 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:07.755 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 --hostid=0dc0e430-4c84-41eb-b0bf-db443f090fb1 -a 10.0.0.1 -t tcp -s 4420 00:16:07.755 00:16:07.755 Discovery Log Number of Records 2, Generation counter 2 00:16:07.755 =====Discovery Log Entry 0====== 00:16:07.755 trtype: tcp 00:16:07.755 adrfam: ipv4 00:16:07.755 subtype: current discovery subsystem 00:16:07.755 treq: not specified, sq flow control disable supported 00:16:07.755 portid: 1 00:16:07.755 trsvcid: 4420 00:16:07.756 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:07.756 traddr: 10.0.0.1 00:16:07.756 eflags: none 00:16:07.756 sectype: none 00:16:07.756 =====Discovery Log Entry 1====== 00:16:07.756 trtype: tcp 00:16:07.756 adrfam: ipv4 00:16:07.756 subtype: nvme subsystem 00:16:07.756 treq: not specified, sq flow control disable supported 00:16:07.756 portid: 1 00:16:07.756 trsvcid: 4420 00:16:07.756 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:07.756 traddr: 10.0.0.1 00:16:07.756 eflags: none 00:16:07.756 sectype: none 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
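[editor's note] The bare mkdir/echo/ln -s commands traced above (nvmf/common.sh@658-677 and host/auth.sh@36-38) build the kernel nvmet target through configfs. xtrace does not print redirection targets, so the attribute paths in this sketch are inferred from the standard nvmet configfs layout; the "echo SPDK-nqn..." at nvmf/common.sh@665 presumably sets a subsystem identity attribute whose name is likewise not visible in the trace:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # the idle disk picked by the GPT scan above
    echo 1            > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"   # the initiator-side address; the kernel target listens here
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

    # host/auth.sh@36-38: register the host NQN and allow only it on the subsystem.
    mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"

The nvme discover output in the log (two discovery log entries, one for the discovery subsystem and one for nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420) confirms the port and subsystem came up as expected.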
ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.756 13:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 nvme0n1 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.015 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.016 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.016 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.016 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.016 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.016 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 nvme0n1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 
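[editor's note] Each nvmet_auth_set_key / connect_authenticate pair, such as the keyid-0 round just traced, amounts to reprogramming the kernel host entry and re-attaching from the SPDK side. In the sketch below the dhchap_* attribute names are inferred (the echo targets are not shown by xtrace), while the RPC invocations are exactly as logged:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Kernel side (nvmet_auth_set_key): program hash, DH group and secrets on the host entry.
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    cat "${keys[0]}"    > "$host/dhchap_key"        # DHHC-1:00:... secret generated earlier
    cat "${ckeys[0]}"   > "$host/dhchap_ctrl_key"   # controller secret for bidirectional auth

    # SPDK host side (connect_authenticate): restrict the negotiation, attach using the
    # keyring entries loaded earlier, check the controller appears, then detach.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    rpc_cmd bdev_nvme_detach_controller nvme0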
13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.275 13:12:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 nvme0n1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:08.275 13:12:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.275 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.534 nvme0n1 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:08.534 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.535 13:12:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.535 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 nvme0n1 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:08.793 
13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
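
Each "nvme0n1" block in the trace above is one DH-HMAC-CHAP round: the target-side key for the current keyid is installed (nvmet_auth_set_key), bdev_nvme_set_options pins the initiator to a single digest/dhgroup pair, bdev_nvme_attach_controller connects with --dhchap-key (plus --dhchap-ctrlr-key when bidirectional authentication is exercised), and the round is verified with bdev_nvme_get_controllers before bdev_nvme_detach_controller tears it down. A minimal stand-alone sketch of that per-key sequence follows; it only mirrors the RPCs visible in the trace and assumes a target already listening on 10.0.0.1:4420 with the matching subsystem secret installed, key names key0/ckey0 registered earlier in the suite (not shown in this excerpt), and scripts/rpc.py used in place of the suite's rpc_cmd wrapper.

  # Sketch of one authentication round, reconstructed from the trace (illustrative only).
  # Assumptions: SPDK target up on 10.0.0.1:4420, nvmet side holds the matching
  # DH-HMAC-CHAP secret, and key0/ckey0 were registered beforehand.
  rpc=./scripts/rpc.py

  # Pin the initiator to the digest/dhgroup pair under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Connect with the host key, and the controller key for bidirectional auth.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # The round passes if exactly this controller appears; then it is detached.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $rpc bdev_nvme_detach_controller nvme0
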
00:16:08.793 nvme0n1 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:08.793 13:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:09.416 13:12:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.416 nvme0n1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.416 13:12:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.416 13:12:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.416 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.675 nvme0n1 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:09.675 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.676 nvme0n1 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.676 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.935 13:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.935 nvme0n1 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:09.935 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.936 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.195 nvme0n1 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.195 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.761 13:12:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.761 13:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.020 nvme0n1 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:11.020 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.021 13:12:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.021 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 nvme0n1 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.280 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 nvme0n1 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.798 nvme0n1 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.798 13:12:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.798 13:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.056 nvme0n1 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.056 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.057 13:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.435 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.694 nvme0n1 00:16:13.694 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.694 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.694 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.694 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.694 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.953 13:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.212 nvme0n1 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.212 13:12:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:14.212 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.213 13:12:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.213 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.471 nvme0n1 00:16:14.471 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.471 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.471 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.471 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.471 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.730 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:14.731 13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.731 
13:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.990 nvme0n1 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.990 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.249 nvme0n1 00:16:15.249 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.250 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.250 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.250 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.250 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.508 13:12:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.508 13:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.076 nvme0n1 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.076 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.644 nvme0n1 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:16.644 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.645 
13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.645 13:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.211 nvme0n1 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.211 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.212 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.778 nvme0n1 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.778 13:12:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:17.778 13:12:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:17.778 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:17.779 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:17.779 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:17.779 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.779 13:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.346 nvme0n1 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.346 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.605 nvme0n1 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.605 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.606 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.606 nvme0n1 00:16:18.606 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.606 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.606 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.606 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.606 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.606 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:18.865 
13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.865 nvme0n1 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.865 13:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.865 
13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:18.865 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.866 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.125 nvme0n1 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.125 nvme0n1 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.125 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.384 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.385 nvme0n1 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.385 
13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.385 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.644 13:12:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.644 nvme0n1 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:19.644 13:12:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.644 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.903 nvme0n1 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:19.903 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.904 13:12:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.904 13:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.162 nvme0n1 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.162 
13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
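The trace above and below repeats the same connect/verify/detach cycle for every DH-HMAC-CHAP key index (0-4) and every FFDHE group in the sha384 pass: the target-side secret is installed with nvmet_auth_set_key, the host is restricted to a single digest/dhgroup pair with bdev_nvme_set_options, the controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller secret exists, i.e. bidirectional authentication), its presence is confirmed with bdev_nvme_get_controllers, and it is detached again. A condensed sketch of one such iteration, assuming key2/ckey2 name secrets that were registered with the host stack earlier in the run (not shown in this excerpt); rpc_cmd is the test framework's wrapper around scripts/rpc.py:

# one connect/verify/detach iteration, condensed from the surrounding trace
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

The attach step is what actually exercises the in-band authentication: if the negotiated digest/dhgroup/secret combination does not match what the target was given via nvmet_auth_set_key, the controller never appears in bdev_nvme_get_controllers and the test fails at the name check.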
00:16:20.162 nvme0n1 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.162 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:20.420 13:12:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 nvme0n1 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.420 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.679 13:12:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.679 13:12:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.679 nvme0n1 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.679 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.937 13:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 nvme0n1 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.937 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.195 nvme0n1 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.195 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.452 nvme0n1 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.452 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.711 13:12:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.711 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.970 nvme0n1 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.970 13:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.970 13:12:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.970 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.230 nvme0n1 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.230 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.231 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.827 nvme0n1 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.827 13:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.086 nvme0n1 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:23.086 13:12:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.086 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.345 nvme0n1 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.345 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.604 13:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.863 nvme0n1 00:16:23.863 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.863 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.863 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.863 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.863 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.122 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.690 nvme0n1 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.690 13:12:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.690 13:12:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.690 13:12:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 nvme0n1 00:16:25.257 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:25.258 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.258 
13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.825 nvme0n1 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.826 13:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.393 nvme0n1 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:26.393 13:12:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.393 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:26.394 13:12:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.394 nvme0n1 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.394 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:26.653 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:26.654 13:12:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.654 nvme0n1 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.654 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.913 nvme0n1 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:26.913 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.914 13:12:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.914 nvme0n1 00:16:26.914 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.914 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.914 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.914 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.914 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.914 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:27.173 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.174 nvme0n1 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.174 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.433 nvme0n1 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.433 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.434 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.693 nvme0n1 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:27.693 
13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.693 nvme0n1 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.693 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.952 
13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.952 13:12:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.952 nvme0n1 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.952 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:28.211 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.212 nvme0n1 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.212 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.471 nvme0n1 00:16:28.471 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.471 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.471 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.471 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.471 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.471 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.472 
13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.472 13:12:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.472 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.731 nvme0n1 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.731 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:28.732 13:12:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.732 13:12:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.991 nvme0n1 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.991 13:12:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.991 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:28.992 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.992 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:28.992 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:28.992 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:28.992 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:28.992 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.992 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.251 nvme0n1 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.251 
13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:29.251 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:29.252 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.252 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
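The trace above repeats one cycle per digest/dhgroup/keyid combination. As a minimal sketch of that cycle, using only the RPCs and flags visible in the trace (rpc_cmd is assumed to be the suite's JSON-RPC wrapper, nvmet_auth_set_key is the target-side helper invoked at host/auth.sh@103, and the key${keyid}/ckey${keyid} names plus the ckeys table are taken as already set up earlier in the test, not defined here):

# One authentication round as exercised in the trace (sketch, not the script itself).
digest=sha512; dhgroup=ffdhe4096; keyid=4

# 1. Program the kernel nvmet target with the matching hash, DH group and key
#    (the nvmet_auth_set_key helper seen in the trace).
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# 2. Restrict the SPDK host to the same digest and DH group.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 3. Connect with DH-HMAC-CHAP; the controller key is only passed when ckey$keyid exists.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

# 4. Verify the controller came up, then detach it for the next combination.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of the log is the same cycle driven by the dhgroups/keys loops, continuing below with the ffdhe6144 group.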
00:16:29.511 nvme0n1 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:29.511 13:12:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.511 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 nvme0n1 00:16:29.770 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.770 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.770 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.770 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.770 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.029 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.029 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.029 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.029 13:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.029 13:12:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.029 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.030 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.030 13:12:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.030 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.030 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.030 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.030 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.030 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.288 nvme0n1 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.289 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.547 nvme0n1 00:16:30.547 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.547 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.547 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.547 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.547 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.547 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:30.806 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.807 13:12:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.065 nvme0n1 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.065 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.066 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.382 nvme0n1 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTk3NWY5YWM3NmQzOTkxNTRjNTAzMmVhZjA3NDVjYjOfi/Z8: 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: ]] 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTRkYWY3ZWU1ZWNmMWRiYzQwODdhMjA3NTgzYjljMDYzY2JlODM0YjFkNjY0NzFhNzgxY2QyN2UyNGQxZTE3MFdgSuQ=: 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.382 13:12:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.382 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.383 13:12:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.948 nvme0n1 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:31.948 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.949 13:12:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.949 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.514 nvme0n1 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWY4ZWVmZTcxYzk1NzY2NGQ3YzBiZWZhNWQxOGIyMmRUXWoD: 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: ]] 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMTA0NDY2NGJiMDgzOWI0OTViZTZiNzU0NTk0MGFRDSzv: 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:32.514 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.515 13:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.081 nvme0n1 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.081 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDExMDNmYWE3OWE1ZDU0YzI3ZWZiZTg4NmZmZGJkOTJkYWQ5ZmFiMTVlZDM5NTAzD10wAg==: 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: ]] 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODc4ZGFkMDVjODQwMzkxNmVhYzljNjcyOGJhOTkxZTJ6DLrq: 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.339 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.906 nvme0n1 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2YyN2VhZGVhNTljOTk0OGJhNDU5NTBkMzFhYWVjZTQwYTdmZGZiMjdkYTk3ZDYwZTk5NjVjZjYyODY3YWUxY2hP9k8=: 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:33.906 13:12:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.906 13:12:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 nvme0n1 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2NkMTYxZjZhMTNmMDYxNGQwMDc0MzFjNmYxNjdmNDA0NjlhZmQwNTY1MjdlNjIzfy4YcQ==: 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzI1OWVkMTMzYjRhZWI1MWU1MWI0Njg0Y2UzMmZhZDdkMzAwMWFlZTNmMmMzNmIyWpbWNw==: 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 request: 00:16:34.474 { 00:16:34.474 "name": "nvme0", 00:16:34.474 "trtype": "tcp", 00:16:34.474 "traddr": "10.0.0.1", 00:16:34.474 "adrfam": "ipv4", 00:16:34.474 "trsvcid": "4420", 00:16:34.474 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:34.474 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:34.474 "prchk_reftag": false, 00:16:34.474 "prchk_guard": false, 00:16:34.474 "hdgst": false, 00:16:34.474 "ddgst": false, 00:16:34.474 "method": "bdev_nvme_attach_controller", 00:16:34.474 "req_id": 1 00:16:34.474 } 00:16:34.474 Got JSON-RPC error response 00:16:34.474 response: 00:16:34.474 { 00:16:34.474 "code": -5, 00:16:34.474 "message": "Input/output error" 00:16:34.474 } 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.474 13:12:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 request: 00:16:34.474 { 00:16:34.474 "name": "nvme0", 00:16:34.474 "trtype": "tcp", 00:16:34.474 "traddr": "10.0.0.1", 00:16:34.474 "adrfam": "ipv4", 00:16:34.474 "trsvcid": "4420", 00:16:34.474 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:34.474 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:34.474 "prchk_reftag": false, 00:16:34.474 "prchk_guard": false, 00:16:34.474 "hdgst": false, 00:16:34.474 "ddgst": false, 00:16:34.474 "dhchap_key": "key2", 00:16:34.474 "method": "bdev_nvme_attach_controller", 00:16:34.474 "req_id": 1 00:16:34.474 } 00:16:34.474 Got JSON-RPC error response 00:16:34.474 response: 00:16:34.474 { 00:16:34.474 "code": -5, 00:16:34.474 "message": "Input/output error" 00:16:34.474 } 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.474 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.475 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.475 13:12:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:34.475 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.475 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.475 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.733 request: 00:16:34.733 { 00:16:34.733 "name": "nvme0", 00:16:34.733 "trtype": "tcp", 00:16:34.733 "traddr": "10.0.0.1", 00:16:34.733 "adrfam": "ipv4", 00:16:34.733 "trsvcid": "4420", 00:16:34.733 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:34.733 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:16:34.733 "prchk_reftag": false, 00:16:34.733 "prchk_guard": false, 00:16:34.733 "hdgst": false, 00:16:34.733 "ddgst": false, 00:16:34.733 "dhchap_key": "key1", 00:16:34.733 "dhchap_ctrlr_key": "ckey2", 00:16:34.733 "method": "bdev_nvme_attach_controller", 00:16:34.733 "req_id": 1 00:16:34.733 } 00:16:34.733 Got JSON-RPC error response 00:16:34.733 response: 00:16:34.733 { 00:16:34.733 "code": -5, 00:16:34.733 "message": "Input/output error" 00:16:34.733 } 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.733 rmmod nvme_tcp 00:16:34.733 rmmod nvme_fabrics 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:16:34.733 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77535 ']' 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77535 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77535 ']' 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77535 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77535 00:16:34.734 killing process with pid 77535 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77535' 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77535 00:16:34.734 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 77535 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.992 13:12:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:34.992 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:35.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:35.814 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:35.814 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:35.814 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fjj /tmp/spdk.key-null.HTY /tmp/spdk.key-sha256.oS3 /tmp/spdk.key-sha384.9B7 /tmp/spdk.key-sha512.KsF /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:35.814 13:12:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:36.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.072 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:36.072 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:16:36.330 ************************************ 00:16:36.330 END TEST nvmf_auth_host 00:16:36.330 ************************************ 00:16:36.330 00:16:36.330 real 0m32.364s 00:16:36.330 user 0m30.258s 00:16:36.330 sys 0m3.504s 00:16:36.330 13:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:36.330 13:12:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.330 13:12:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:16:36.330 13:12:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:36.330 13:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:36.330 13:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.330 13:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.330 ************************************ 00:16:36.330 START TEST nvmf_digest 00:16:36.330 ************************************ 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:36.331 * Looking for test storage... 00:16:36.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.331 13:12:28 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:36.331 Cannot find device "nvmf_tgt_br" 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.331 Cannot find device "nvmf_tgt_br2" 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:36.331 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:36.589 Cannot find device "nvmf_tgt_br" 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:36.589 Cannot find device "nvmf_tgt_br2" 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.589 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:36.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:36.848 00:16:36.848 --- 10.0.0.2 ping statistics --- 00:16:36.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.848 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:36.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:16:36.848 00:16:36.848 --- 10.0.0.3 ping statistics --- 00:16:36.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.848 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:36.848 00:16:36.848 --- 10.0.0.1 ping statistics --- 00:16:36.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.848 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:36.848 ************************************ 00:16:36.848 START TEST nvmf_digest_clean 00:16:36.848 ************************************ 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79089 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79089 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:36.848 
13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79089 ']' 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:36.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:36.848 13:12:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:36.848 [2024-07-25 13:12:28.919818] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:36.848 [2024-07-25 13:12:28.919972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.106 [2024-07-25 13:12:29.065552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.106 [2024-07-25 13:12:29.231239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.106 [2024-07-25 13:12:29.231369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.106 [2024-07-25 13:12:29.231384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.106 [2024-07-25 13:12:29.231395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.106 [2024-07-25 13:12:29.231404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
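For reference, a minimal sketch of the target bring-up traced above, where nvmfappstart launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc and waitforlisten polls the RPC socket before configuration continues. This sketch is not part of the captured output: the polling loop, the null bdev sizing, and the subsystem options are assumptions standing in for common_target_config internals that the trace only shows indirectly (the null0 bdev, the TCP transport init, and the 10.0.0.2:4420 listener).

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"

    # Start the target idle inside the test namespace; --wait-for-rpc defers
    # subsystem initialization until framework_start_init is issued.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &

    # Poll the UNIX-domain RPC socket until the target answers (the harness
    # does this with waitforlisten), then finish initialization.
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    "$RPC" framework_start_init

    # One plausible configuration matching the state seen later in the trace:
    # a null bdev exported over TCP on 10.0.0.2:4420. Sizes are illustrative.
    "$RPC" bdev_null_create null0 100 4096
    "$RPC" nvmf_create_transport -t tcp
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420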
00:16:37.106 [2024-07-25 13:12:29.231464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.040 13:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:38.040 13:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:38.040 13:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:38.040 13:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:38.040 13:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.040 [2024-07-25 13:12:30.070434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:38.040 null0 00:16:38.040 [2024-07-25 13:12:30.116618] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.040 [2024-07-25 13:12:30.140751] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79121 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79121 /var/tmp/bperf.sock 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79121 ']' 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:38.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.040 13:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:38.040 [2024-07-25 13:12:30.203432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:38.040 [2024-07-25 13:12:30.203787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79121 ] 00:16:38.308 [2024-07-25 13:12:30.345163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.308 [2024-07-25 13:12:30.462914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.257 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.257 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:39.257 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:39.257 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:39.257 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:39.257 [2024-07-25 13:12:31.398506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:39.515 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:39.515 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:39.515 nvme0n1 00:16:39.774 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:39.774 13:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:39.774 Running I/O for 2 seconds... 
00:16:41.674 00:16:41.675 Latency(us) 00:16:41.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.675 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:41.675 nvme0n1 : 2.01 16063.36 62.75 0.00 0.00 7962.76 6672.76 17635.14 00:16:41.675 =================================================================================================================== 00:16:41.675 Total : 16063.36 62.75 0.00 0.00 7962.76 6672.76 17635.14 00:16:41.675 0 00:16:41.675 13:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:41.675 13:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:41.675 13:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:41.675 13:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:41.675 13:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:41.675 | select(.opcode=="crc32c") 00:16:41.675 | "\(.module_name) \(.executed)"' 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79121 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79121 ']' 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79121 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.933 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79121 00:16:42.191 killing process with pid 79121 00:16:42.191 Received shutdown signal, test time was about 2.000000 seconds 00:16:42.191 00:16:42.191 Latency(us) 00:16:42.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.191 =================================================================================================================== 00:16:42.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79121' 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79121 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79121 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79181 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79181 /var/tmp/bperf.sock 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79181 ']' 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:42.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:42.191 13:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:42.449 [2024-07-25 13:12:34.425969] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:42.449 [2024-07-25 13:12:34.426244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79181 ] 00:16:42.449 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:42.449 Zero copy mechanism will not be used. 
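For reference, a condensed sketch of the digest verification step traced above, where get_accel_stats queries the bdevperf instance over /var/tmp/bperf.sock and the test asserts that the crc32c work ran in the expected module (software here, since scan_dsa=false). This is not part of the captured output; the jq filter is the one shown in the trace, and the success message is illustrative.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Read accel framework statistics from bdevperf and extract which module
    # executed the crc32c (digest) operations and how many were executed.
    read -r acc_module acc_executed < <(
        "$RPC" -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # With no DSA configured, the software module is expected to do the work.
    if (( acc_executed > 0 )) && [[ "$acc_module" == software ]]; then
        echo "crc32c digests were computed by the software accel module"
    fi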
00:16:42.449 [2024-07-25 13:12:34.565179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.708 [2024-07-25 13:12:34.681274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.274 13:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.274 13:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:43.274 13:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:43.274 13:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:43.275 13:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:43.533 [2024-07-25 13:12:35.688481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:43.791 13:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:43.791 13:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:44.050 nvme0n1 00:16:44.050 13:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:44.050 13:12:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:44.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:44.050 Zero copy mechanism will not be used. 00:16:44.050 Running I/O for 2 seconds... 
00:16:46.586 00:16:46.586 Latency(us) 00:16:46.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.586 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:46.586 nvme0n1 : 2.00 8663.61 1082.95 0.00 0.00 1843.86 1623.51 6047.19 00:16:46.586 =================================================================================================================== 00:16:46.586 Total : 8663.61 1082.95 0.00 0.00 1843.86 1623.51 6047.19 00:16:46.586 0 00:16:46.586 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:46.586 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:46.586 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:46.587 | select(.opcode=="crc32c") 00:16:46.587 | "\(.module_name) \(.executed)"' 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79181 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79181 ']' 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79181 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79181 00:16:46.587 killing process with pid 79181 00:16:46.587 Received shutdown signal, test time was about 2.000000 seconds 00:16:46.587 00:16:46.587 Latency(us) 00:16:46.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.587 =================================================================================================================== 00:16:46.587 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79181' 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79181 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79181 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79242 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79242 /var/tmp/bperf.sock 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79242 ']' 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:46.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.587 13:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:46.847 [2024-07-25 13:12:38.805657] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
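For reference, a condensed sketch of the per-run bperf sequence that repeats throughout this section, shown with the randwrite/4096/128 parameters of the run starting above. It is not part of the captured output; the harness additionally waits on /var/tmp/bperf.sock with waitforlisten and tears the process down with killprocess, which is omitted here.

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start bdevperf idle on its own RPC socket; -z plus --wait-for-rpc keep it
    # waiting until it has been configured and told to run.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Finish initialization, then attach the target subsystem with the data
    # digest enabled so NVMe/TCP data PDUs carry a CRC32C data digest.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Drive the 2-second workload configured by -t above.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests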
00:16:46.847 [2024-07-25 13:12:38.805753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79242 ] 00:16:46.847 [2024-07-25 13:12:38.945421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.107 [2024-07-25 13:12:39.040547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.673 13:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:47.673 13:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:47.673 13:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:47.673 13:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:47.673 13:12:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:47.930 [2024-07-25 13:12:40.106862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:48.188 13:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.188 13:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:48.446 nvme0n1 00:16:48.446 13:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:48.446 13:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:48.446 Running I/O for 2 seconds... 
00:16:50.978 00:16:50.978 Latency(us) 00:16:50.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.978 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.978 nvme0n1 : 2.01 16274.04 63.57 0.00 0.00 7858.08 4647.10 16562.73 00:16:50.978 =================================================================================================================== 00:16:50.978 Total : 16274.04 63.57 0.00 0.00 7858.08 4647.10 16562.73 00:16:50.978 0 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:50.978 | select(.opcode=="crc32c") 00:16:50.978 | "\(.module_name) \(.executed)"' 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79242 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79242 ']' 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79242 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.978 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79242 00:16:50.978 killing process with pid 79242 00:16:50.978 Received shutdown signal, test time was about 2.000000 seconds 00:16:50.978 00:16:50.978 Latency(us) 00:16:50.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.978 =================================================================================================================== 00:16:50.978 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.979 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:50.979 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:50.979 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79242' 00:16:50.979 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79242 00:16:50.979 13:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79242 00:16:50.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79302 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79302 /var/tmp/bperf.sock 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79302 ']' 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.979 13:12:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:51.238 [2024-07-25 13:12:43.198292] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:51.238 [2024-07-25 13:12:43.198422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:16:51.238 Zero copy mechanism will not be used. 
00:16:51.238 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79302 ] 00:16:51.238 [2024-07-25 13:12:43.334350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.497 [2024-07-25 13:12:43.453247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.064 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.064 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:16:52.064 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:52.064 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:52.064 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:52.323 [2024-07-25 13:12:44.465823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:52.581 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:52.581 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:52.839 nvme0n1 00:16:52.839 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:52.839 13:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:52.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:52.839 Zero copy mechanism will not be used. 00:16:52.839 Running I/O for 2 seconds... 
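This pass repeats the same flow with run_bperf randwrite 131072 16 false, i.e. 128 KiB random writes at queue depth 16. The flags below are the ones visible in the log; the comments only spell out how the run_bperf arguments map onto bdevperf options and why zero copy is skipped for this I/O size.

  # Second bperf instance, flags copied from the log:
  #   -w randwrite -o 131072 -q 16  -> workload, I/O size in bytes, queue depth
  #   -t 2                          -> 2-second timed run
  #   -z --wait-for-rpc             -> stay resident and wait for RPC configuration
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
  # 131072 bytes is above the 65536-byte zero-copy threshold, hence the
  # "Zero copy mechanism will not be used" notices from the uring sock layer.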
00:16:54.835 00:16:54.835 Latency(us) 00:16:54.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.835 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:54.835 nvme0n1 : 2.00 6411.62 801.45 0.00 0.00 2489.54 1906.50 12451.84 00:16:54.835 =================================================================================================================== 00:16:54.835 Total : 6411.62 801.45 0.00 0.00 2489.54 1906.50 12451.84 00:16:54.835 0 00:16:54.835 13:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:54.835 13:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:54.835 13:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:54.835 13:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:54.835 13:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:54.835 | select(.opcode=="crc32c") 00:16:54.835 | "\(.module_name) \(.executed)"' 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79302 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79302 ']' 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79302 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.092 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79302 00:16:55.351 killing process with pid 79302 00:16:55.351 Received shutdown signal, test time was about 2.000000 seconds 00:16:55.351 00:16:55.351 Latency(us) 00:16:55.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.351 =================================================================================================================== 00:16:55.351 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79302' 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79302 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79302 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79089 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79089 ']' 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79089 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.351 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79089 00:16:55.609 killing process with pid 79089 00:16:55.609 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.609 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.609 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79089' 00:16:55.609 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79089 00:16:55.609 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79089 00:16:55.609 00:16:55.609 real 0m18.932s 00:16:55.609 user 0m36.561s 00:16:55.609 sys 0m4.899s 00:16:55.609 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.609 ************************************ 00:16:55.609 END TEST nvmf_digest_clean 00:16:55.609 ************************************ 00:16:55.609 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:55.868 ************************************ 00:16:55.868 START TEST nvmf_digest_error 00:16:55.868 ************************************ 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79391 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 79391 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79391 ']' 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.868 13:12:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:55.868 [2024-07-25 13:12:47.900142] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:55.868 [2024-07-25 13:12:47.900247] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.868 [2024-07-25 13:12:48.032936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.126 [2024-07-25 13:12:48.154885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.126 [2024-07-25 13:12:48.154973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.126 [2024-07-25 13:12:48.155000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.126 [2024-07-25 13:12:48.155009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.126 [2024-07-25 13:12:48.155016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
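Unlike the clean test, nvmf_digest_error starts the target with --wait-for-rpc so that crc32c can be routed to the error accel module before initialization completes; the accel_assign_opc call and the TCP transport/listener setup that follow in the log all go over the target's RPC socket (/var/tmp/spdk.sock). A minimal sketch of that ordering; the explicit framework_start_init is shown only for completeness and is implied by the subsystem-init notices that follow:

  # Error-injection variant: reroute crc32c before the target finishes init.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py defaults to /var/tmp/spdk.sock

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # (the test script waits for /var/tmp/spdk.sock before issuing any RPCs)

  $RPC accel_assign_opc -o crc32c -m error   # crc32c operations now go through the error module
  $RPC framework_start_init                  # finish init; transport/subsystem setup follows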
00:16:56.126 [2024-07-25 13:12:48.155045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.692 [2024-07-25 13:12:48.875604] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:56.692 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.693 13:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.968 [2024-07-25 13:12:48.940751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:56.968 null0 00:16:56.968 [2024-07-25 13:12:48.991648] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.968 [2024-07-25 13:12:49.015769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79423 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79423 /var/tmp/bperf.sock 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79423 ']' 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:56.968 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:56.969 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:56.969 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.969 13:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:56.969 [2024-07-25 13:12:49.077760] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:56.969 [2024-07-25 13:12:49.077876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79423 ] 00:16:57.227 [2024-07-25 13:12:49.216557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.227 [2024-07-25 13:12:49.350353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.227 [2024-07-25 13:12:49.409429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.160 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:58.418 nvme0n1 00:16:58.418 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:58.418 13:12:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.419 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:58.419 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.419 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:58.419 13:12:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:58.677 Running I/O for 2 seconds... 00:16:58.677 [2024-07-25 13:12:50.675313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.677 [2024-07-25 13:12:50.675380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.677 [2024-07-25 13:12:50.675413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.677 [2024-07-25 13:12:50.692920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.677 [2024-07-25 13:12:50.692966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.677 [2024-07-25 13:12:50.692981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.677 [2024-07-25 13:12:50.711107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.677 [2024-07-25 13:12:50.711182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.677 [2024-07-25 13:12:50.711197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.677 [2024-07-25 13:12:50.728971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.677 [2024-07-25 13:12:50.729014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.677 [2024-07-25 13:12:50.729029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.677 [2024-07-25 13:12:50.746471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.677 [2024-07-25 13:12:50.746513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.677 [2024-07-25 13:12:50.746529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.677 [2024-07-25 13:12:50.764483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.677 [2024-07-25 13:12:50.764526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9807 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.677 [2024-07-25 13:12:50.764541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.678 [2024-07-25 13:12:50.782571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.678 [2024-07-25 13:12:50.782642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.678 [2024-07-25 13:12:50.782671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.678 [2024-07-25 13:12:50.799339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.678 [2024-07-25 13:12:50.799397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.678 [2024-07-25 13:12:50.799411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.678 [2024-07-25 13:12:50.817224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.678 [2024-07-25 13:12:50.817267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.678 [2024-07-25 13:12:50.817282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.678 [2024-07-25 13:12:50.834995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.678 [2024-07-25 13:12:50.835036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.678 [2024-07-25 13:12:50.835052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.678 [2024-07-25 13:12:50.852670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.678 [2024-07-25 13:12:50.852735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.678 [2024-07-25 13:12:50.852751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.870185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.870228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.870243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.887497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.887539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:22545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.887553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.904695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.904775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.904790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.922513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.922586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.922617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.939627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.939683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.939713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.955555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.955608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.955638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.970962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.971018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.971032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:50.988425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:50.988481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:50.988513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.006572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:51.006646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.006682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.024647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:51.024702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.024757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.042581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:51.042636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.042666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.060008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:51.060078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.060109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.076187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:51.076241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.076271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.091652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:51.091705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.091735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.107517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 [2024-07-25 13:12:51.107570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.107599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:58.937 [2024-07-25 13:12:51.122530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:58.937 
[2024-07-25 13:12:51.122585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.937 [2024-07-25 13:12:51.122614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.137513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.137571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.137585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.154322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.154378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.154408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.171876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.171938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.171967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.189100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.189154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.189184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.206522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.206577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.206607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.224625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.224678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.224707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.242320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.242376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.242407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.259072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.259126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.259155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.196 [2024-07-25 13:12:51.276977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.196 [2024-07-25 13:12:51.277023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.196 [2024-07-25 13:12:51.277039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.197 [2024-07-25 13:12:51.295360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.197 [2024-07-25 13:12:51.295420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.197 [2024-07-25 13:12:51.295437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.197 [2024-07-25 13:12:51.313951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.197 [2024-07-25 13:12:51.314040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.197 [2024-07-25 13:12:51.314056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.197 [2024-07-25 13:12:51.333012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.197 [2024-07-25 13:12:51.333083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.197 [2024-07-25 13:12:51.333098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.197 [2024-07-25 13:12:51.351152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.197 [2024-07-25 13:12:51.351214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.197 [2024-07-25 13:12:51.351230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.197 [2024-07-25 13:12:51.368505] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.197 [2024-07-25 13:12:51.368547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.197 [2024-07-25 13:12:51.368598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.387139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.387213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.387229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.405400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.405446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.405461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.423357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.423399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.423413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.441297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.441341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.441356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.459178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.459220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.459235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.476633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.476688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.476727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
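Each repetition above is the injected crc32c corruption surfacing on the initiator: nvme_tcp reports a data digest error on the qpair and the affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); because the controller options set --bdev-retry-count -1 earlier in the log, those completions are retried and the 2-second run keeps going. If this output is captured to a file, the two counts can be pulled out with plain grep (bperf.log is only a stand-in for wherever the log is saved):

  # Illustrative only; bperf.log is a placeholder for a saved copy of this output.
  grep -c 'data digest error on tqpair'               bperf.log   # digest errors seen by the initiator
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log   # matching retriable completions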
00:16:59.456 [2024-07-25 13:12:51.493850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.493915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.493945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.511505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.511559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.511589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.528918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.528968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.528983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.546247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.546306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.546321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.563647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.563702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.563733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.578854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.578906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.578919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.594124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.594166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.456 [2024-07-25 13:12:51.594181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.456 [2024-07-25 13:12:51.610001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.456 [2024-07-25 13:12:51.610058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.457 [2024-07-25 13:12:51.610088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.457 [2024-07-25 13:12:51.626539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.457 [2024-07-25 13:12:51.626599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.457 [2024-07-25 13:12:51.626630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.457 [2024-07-25 13:12:51.643785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.457 [2024-07-25 13:12:51.643921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.457 [2024-07-25 13:12:51.643946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.664838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.664916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.664939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.682371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.682430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.682463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.699128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.699183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.699213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.715759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.715816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.715857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.732912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.732956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.732971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.750531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.750609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.750640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.776248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.776293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.776310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.794259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.794300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.794330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.811142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.811223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.811254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.827824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.827889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.827920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.846260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.846300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:59.716 [2024-07-25 13:12:51.846331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.865155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.865201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.865217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.881685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.881751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.881781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.716 [2024-07-25 13:12:51.900960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.716 [2024-07-25 13:12:51.901007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.716 [2024-07-25 13:12:51.901024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:51.920608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:51.920669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:51.920693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:51.939908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:51.939968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:51.939992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:51.959627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:51.959689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:51.959709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:51.981155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:51.981232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:24656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:51.981268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.002811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.002923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.002954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.023884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.023943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.023962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.043013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.043068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.043087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.062143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.062199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.062217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.081301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.081359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.081377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.100487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.100539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.100558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.118141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.118181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.118197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.134998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.135038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.135054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:59.976 [2024-07-25 13:12:52.151866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:16:59.976 [2024-07-25 13:12:52.151906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.976 [2024-07-25 13:12:52.151921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.168893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.168933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.168947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.185806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.185875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.185890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.202784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.202828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.202854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.219882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.219922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.219937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.236781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 
[2024-07-25 13:12:52.236823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.236850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.254051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.254091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.254105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.271457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.271498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.271512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.288276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.288316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.288331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.305177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.305216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.305230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.322162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.322201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.322216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.339035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.339074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.339089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.355984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.356022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.356036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.372996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.373043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.373058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.389930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.389971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.389985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.406866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.236 [2024-07-25 13:12:52.406914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.236 [2024-07-25 13:12:52.406929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.236 [2024-07-25 13:12:52.423819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.237 [2024-07-25 13:12:52.423876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.237 [2024-07-25 13:12:52.423890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.440790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.440847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.440863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.457732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.457778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.457792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.474736] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.474781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.474795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.491747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.491796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.491811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.508712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.508773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.508788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.525958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.526000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.526014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.542981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.543027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.543042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.560013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.560066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.560082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.577011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.577055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.577069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:00.496 [2024-07-25 13:12:52.594122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.594182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.594207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.611080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.611126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.611141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.628035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.628077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.628092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 [2024-07-25 13:12:52.644957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xae94f0) 00:17:00.496 [2024-07-25 13:12:52.645000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.496 [2024-07-25 13:12:52.645015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.496 00:17:00.497 Latency(us) 00:17:00.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.497 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:00.497 nvme0n1 : 2.01 14443.18 56.42 0.00 0.00 8855.65 7119.59 34793.66 00:17:00.497 =================================================================================================================== 00:17:00.497 Total : 14443.18 56.42 0.00 0.00 8855.65 7119.59 34793.66 00:17:00.497 0 00:17:00.497 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:00.497 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:00.497 | .driver_specific 00:17:00.497 | .nvme_error 00:17:00.497 | .status_code 00:17:00.497 | .command_transient_transport_error' 00:17:00.497 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:00.497 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79423 00:17:00.755 13:12:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79423 ']' 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79423 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79423 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:00.755 killing process with pid 79423 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79423' 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79423 00:17:00.755 Received shutdown signal, test time was about 2.000000 seconds 00:17:00.755 00:17:00.755 Latency(us) 00:17:00.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.755 =================================================================================================================== 00:17:00.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.755 13:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79423 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79478 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79478 /var/tmp/bperf.sock 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79478 ']' 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
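
For readers following the trace: the error-count check and the bdevperf relaunch traced above reduce to the commands sketched below. This is a minimal sketch assembled from the rpc.py and bdevperf invocations visible in this log; the errcount variable and the trailing & are illustrative glue, not quoted from digest.sh.

  # Read the per-bdev NVMe error counters from the running bdevperf over its
  # RPC socket and pull out the transient transport error count that the
  # injected data-digest errors should have incremented (same RPC call and
  # jq filter as in the digest.sh trace above).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the 4 KiB randread pass above reported 113

  # Relaunch bdevperf in RPC-driven mode (-z) for the next pass, as shown in
  # the trace: 128 KiB random reads, queue depth 16, 2-second run; it is
  # backgrounded here only so a caller could record its pid.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
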
00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.014 13:12:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:01.272 [2024-07-25 13:12:53.221396] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:01.272 [2024-07-25 13:12:53.221506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79478 ] 00:17:01.272 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:01.272 Zero copy mechanism will not be used. 00:17:01.272 [2024-07-25 13:12:53.360494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.531 [2024-07-25 13:12:53.467963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.531 [2024-07-25 13:12:53.525985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:02.097 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.097 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:02.097 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:02.097 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:02.356 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:02.356 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.356 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:02.356 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.356 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.356 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.615 nvme0n1 00:17:02.615 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:02.615 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.615 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:02.615 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.615 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:02.615 13:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:02.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:02.875 Zero copy mechanism will not be used. 00:17:02.875 Running I/O for 2 seconds... 00:17:02.875 [2024-07-25 13:12:54.858020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.858107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.858124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.862511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.862582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.862596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.867169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.867227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.867242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.871557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.871610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.871640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.875994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.876047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.876094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.880236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.880290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.880320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.884532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 
13:12:54.884590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.884604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.888731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.888787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.888801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.892993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.893050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.893063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.897485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.897525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.897539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.901872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.901920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.901935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.906391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.906431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.906445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.911383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.911450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.911480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.916091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.916133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.916147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.920587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.920643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.920656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.925108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.925177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.925222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.929825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.875 [2024-07-25 13:12:54.929902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.875 [2024-07-25 13:12:54.929933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.875 [2024-07-25 13:12:54.934422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.934477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.934507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.938907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.938975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.938989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.943411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.943465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.943479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.947969] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.948025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.948039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.952409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.952451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.952465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.956758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.956799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.956813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.961365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.961406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.961419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.966051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.966107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.966130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.970662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.970735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.970768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.975252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.975292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.975306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.979752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.979807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.979837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.984332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.984373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.984387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.988793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.988844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.988859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.993335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.993376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.993390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:54.997781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:54.997822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:54.997851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.002182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.002223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.002236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.006605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.006646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.006660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.011059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.011098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.011111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.015492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.015594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.015624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.020107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.020148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.020161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.024817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.024867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.024881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.029434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.029490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.029520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.034213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.034254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.034268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.038788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.038866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.038880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.043442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.043514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.043545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.048103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.876 [2024-07-25 13:12:55.048141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.876 [2024-07-25 13:12:55.048170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:02.876 [2024-07-25 13:12:55.052714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.877 [2024-07-25 13:12:55.052782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.877 [2024-07-25 13:12:55.052796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.877 [2024-07-25 13:12:55.057363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.877 [2024-07-25 13:12:55.057420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.877 [2024-07-25 13:12:55.057438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:02.877 [2024-07-25 13:12:55.061991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:02.877 [2024-07-25 13:12:55.062059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.877 [2024-07-25 13:12:55.062073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.066697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.066754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.137 [2024-07-25 13:12:55.066784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.071180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.071250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.137 [2024-07-25 13:12:55.071281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.075805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.075867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.137 [2024-07-25 13:12:55.075897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.080476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.080531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.137 [2024-07-25 13:12:55.080545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.084991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.085073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.137 [2024-07-25 13:12:55.085087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.089767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.089820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.137 [2024-07-25 13:12:55.089861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.094129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.094181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.137 [2024-07-25 13:12:55.094226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.137 [2024-07-25 13:12:55.098610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.137 [2024-07-25 13:12:55.098665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.098695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.102926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.102978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.103007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.107324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.107377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.107407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.111736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.111788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.111817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.116226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.116265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.116279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.120929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.120970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.120984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.125334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.125390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.125403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.130109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.130146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.130175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.134616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.134669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.134698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.139380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.139434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.139458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.144055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.144126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.144140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.148613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.148651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.148681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.153213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.153252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.153266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.157797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.157874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.157888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.162627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.162680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.162707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.167146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 
00:17:03.138 [2024-07-25 13:12:55.167215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.167244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.171480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.171532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.171561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.175954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.176004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.176032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.180278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.180331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.180360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.184528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.184596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.184624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.189355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.189410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.189439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.193735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.193794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.193823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.198257] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.198296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.198310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.202775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.202826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.202864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.207493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.207560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.207590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.212323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.212363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.212377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.138 [2024-07-25 13:12:55.217180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.138 [2024-07-25 13:12:55.217221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.138 [2024-07-25 13:12:55.217234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.221783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.221822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.221865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.226124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.226170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.226215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.230664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.230717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.230745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.235068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.235119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.235148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.239792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.239871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.239886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.244332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.244372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.244386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.248955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.248995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.249009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.253373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.253419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.253432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.257800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.257864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.257895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.262369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.262415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.262429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.266921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.266981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.267010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.271309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.271350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.271364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.275933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.275993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.276021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.280514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.280569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.280598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.285285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.285337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.285351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.289985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.290052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.290083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.294617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.294689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.294703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.299208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.299248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.299261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.303778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.303859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.303875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.308310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.308376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.308390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.313129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.313185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.313210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.317689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.317762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.139 [2024-07-25 13:12:55.317776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.139 [2024-07-25 13:12:55.322427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.139 [2024-07-25 13:12:55.322498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.139 [2024-07-25 13:12:55.322528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.327073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.327131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.327144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.331673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.331740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.331768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.336537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.336604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.336632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.341147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.341200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.341241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.345982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.346036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.346048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.350757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.350826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.350883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.355326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.355379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.355423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.359796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.359890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.359920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.364810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.364864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.364878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.369692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.369746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.369789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.374607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.374662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.374707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.379339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.379381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.379395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.383996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.384070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.384103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.388792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.388844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.388859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.393360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.400 [2024-07-25 13:12:55.393429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.400 [2024-07-25 13:12:55.393458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.400 [2024-07-25 13:12:55.397998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.398038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.398051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.402360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.402401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.402415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.406663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.406703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.406717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.411082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.411123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.411136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.415597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.415638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.415651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.419937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 
00:17:03.401 [2024-07-25 13:12:55.419978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.419991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.424369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.424409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.424423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.428846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.428887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.428900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.433063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.433110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.433124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.437459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.437499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.437512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.441667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.441707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.441721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.446032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.446072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.446086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.450375] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.450415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.450428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.454749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.454790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.454804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.459070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.459109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.459123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.463528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.463569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.463583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.467814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.467863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.467878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.472262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.472303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.472317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.476553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.476593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.476607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.481030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.481082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.481095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.485360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.485401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.485415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.489658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.489710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.489724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.494102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.494142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.494156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.498527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.498568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.498582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.502827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.502916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.502929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.507448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.507488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.401 [2024-07-25 13:12:55.507503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.401 [2024-07-25 13:12:55.511707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.401 [2024-07-25 13:12:55.511747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.511763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.516145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.516186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.516200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.520469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.520509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.520523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.524926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.524975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.524989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.529282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.529322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.529336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.533600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.533641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.533655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.537929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.537969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.537983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.542170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.542210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.542223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.546431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.546471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.546486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.550736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.550776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.550790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.554970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.555010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.555024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.559376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.559416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.563727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.563783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.563797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.568005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.568043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.402 [2024-07-25 13:12:55.568057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.572308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.572349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.572362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.576660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.576700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.576715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.581073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.581114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.581128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.402 [2024-07-25 13:12:55.585424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.402 [2024-07-25 13:12:55.585466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.402 [2024-07-25 13:12:55.585480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.662 [2024-07-25 13:12:55.589951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.590004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.590019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.594300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.594341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.594354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.598695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.598737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.598751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.603034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.603082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.603095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.607328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.607369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.607382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.611786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.611826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.611853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.616093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.616134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.616148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.620485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.620534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.620548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.624932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.624972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.624986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.629217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.629277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.629290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.633445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.633488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.633502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.637763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.637806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.637820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.642067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.642108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.642121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.646421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.646462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.646476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.650681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.650722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.650736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.654922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.654963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.654976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.659290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 
00:17:03.663 [2024-07-25 13:12:55.659330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.659344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.663616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.663657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.663671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.667918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.667963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.667977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.672205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.672246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.672260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.676616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.676655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.676668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.681075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.681119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.681133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.685459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.685500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.685514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.689823] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.689872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.689885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.694166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.694207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.694220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.698510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.698551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.698565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.702895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.663 [2024-07-25 13:12:55.702935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.663 [2024-07-25 13:12:55.702948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.663 [2024-07-25 13:12:55.707276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.707316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.707330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.711706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.711746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.711760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.716284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.716326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.716340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.721051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.721123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.721138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.725437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.725478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.725492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.730079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.730119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.730133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.734337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.734377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.734390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.738692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.738733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.738747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.743060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.743101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.743116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.747457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.747504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.747517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.751835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.751887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.751902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.756122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.756161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.756175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.760502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.760542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.760557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.764878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.764918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.764932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.769140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.769181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.769195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.773364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.773404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.773418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.777594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.777635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.777649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.781876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.781919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.781933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.786188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.786229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.786243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.790462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.790503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.790524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.794914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.794953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.794968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.799329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.799370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.799384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.803795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.803845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.803862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.808121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.808160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.664 [2024-07-25 13:12:55.808182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.664 [2024-07-25 13:12:55.812704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.664 [2024-07-25 13:12:55.812767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.664 [2024-07-25 13:12:55.812782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.817345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.817384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.817398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.821671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.821729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.821743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.826209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.826249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.826263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.830459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.830499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.830513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.834863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.834914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.834928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.839091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.839130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.839144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.843410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.843450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.843464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.665 [2024-07-25 13:12:55.847764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.665 [2024-07-25 13:12:55.847802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.665 [2024-07-25 13:12:55.847815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.852109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.852149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.852162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.856577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.856627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.856641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.861256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.861299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.861313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.865802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.865864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.865877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.870433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.870473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.870487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.875041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.875129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.875142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.879682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.879735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.879765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.884150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.884221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.884235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.889154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.889232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.889245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.893653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.893706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.893735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.898135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.898188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.898219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.902707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 
00:17:03.924 [2024-07-25 13:12:55.902773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.902801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.907408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.907447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.907462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.911874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.911924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.911938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.916242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.924 [2024-07-25 13:12:55.916282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.924 [2024-07-25 13:12:55.916295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.924 [2024-07-25 13:12:55.920790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.920842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.920858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.925305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.925343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.925357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.929798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.929881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.929896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.934284] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.934324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.934337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.938782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.938893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.938908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.943397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.943438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.943451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.947965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.948016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.948045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.952783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.952824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.952851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.957358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.957440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.957469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.962170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.962211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.962224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.966612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.966664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.966693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.971051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.971139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.971152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.975597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.975649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.975679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.980117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.980156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.980169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.984680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.984745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.984760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.989150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.989191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.989204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.993675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.993727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.993755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:55.998006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:55.998073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:55.998102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.002647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.002699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.002727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.007255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.007294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.007308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.011793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.011871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.011886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.016250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.016289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.016303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.020957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.021004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.021017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.025440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.025481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.025495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.029838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.029889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.029903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.034355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.034396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.034410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.925 [2024-07-25 13:12:56.038999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.925 [2024-07-25 13:12:56.039051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.925 [2024-07-25 13:12:56.039096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.043562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.043631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.043660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.048021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.048056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.048101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.052514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.052615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.052644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.057185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.057224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.926 [2024-07-25 13:12:56.057239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.061827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.061902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.061915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.066440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.066480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.066494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.071001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.071065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.071094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.075534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.075637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.075666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.080087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.080126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.080140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.084614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.084669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.084683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.089145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.089186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.089200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.093838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.093903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.093917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.098459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.098531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.098580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.102925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.102990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.103003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.107603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.107658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.107687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:03.926 [2024-07-25 13:12:56.112119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:03.926 [2024-07-25 13:12:56.112173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.926 [2024-07-25 13:12:56.112203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.116781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.116822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.116848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.121412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.121465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.121494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.125853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.125913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.125941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.130432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.130520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.130533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.134978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.135030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.135060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.139663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.139716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.139745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.144270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.144310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.144323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.148901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.148941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.148955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.153422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 
00:17:04.186 [2024-07-25 13:12:56.153463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.158170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.158250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.158264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.162944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.163005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.163034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.167503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.167557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.167589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.172263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.172303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.172316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.186 [2024-07-25 13:12:56.176921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.186 [2024-07-25 13:12:56.176962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.186 [2024-07-25 13:12:56.176976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.181267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.181306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.181320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.185706] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.185761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.185791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.190325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.190373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.190387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.194925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.194990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.195005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.199537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.199623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.199653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.203961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.204017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.204031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.208267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.208307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.208321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.212672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.212745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.212776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.217199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.217253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.217267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.221812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.221874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.221903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.226353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.226394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.226418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.231002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.231054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.231083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.235605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.235659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.235687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.240075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.240144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.240174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.244515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.244585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.244614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.249115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.249169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.249182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.253731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.253785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.253816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.258330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.258367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.258397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.262926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.262988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.263017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.267163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.267230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.267260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.271611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.271663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.271692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.275985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.276037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.276065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.280517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.280574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.280587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.285010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.285050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.285078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.289414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.289468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.289497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.294182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.294268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.187 [2024-07-25 13:12:56.294298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.187 [2024-07-25 13:12:56.298875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.187 [2024-07-25 13:12:56.298923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.298937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.303402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.303443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.303458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.307822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.307885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:04.188 [2024-07-25 13:12:56.307899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.312319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.312374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.312387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.316860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.316896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.316909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.321447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.321500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.321530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.325965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.326001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.326031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.330599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.330652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.330681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.335231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.335271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.335284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.339861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.339922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.339951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.344268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.344312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.344326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.348934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.348974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.348987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.353546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.353590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.353604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.358032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.358084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.358113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.362497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.362569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.362610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.367216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.367256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.367270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.188 [2024-07-25 13:12:56.371835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.188 [2024-07-25 13:12:56.371898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.188 [2024-07-25 13:12:56.371928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.376368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.376409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.376422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.380826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.380886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.385312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.385351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.385364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.389708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.389758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.389786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.394329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.394368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.394382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.398749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.398789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.402989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 
00:17:04.448 [2024-07-25 13:12:56.403044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.403058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.407370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.407414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.407428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.411963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.412002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.412015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.416271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.416310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.416325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.420555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.420607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.448 [2024-07-25 13:12:56.420622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.448 [2024-07-25 13:12:56.424892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.448 [2024-07-25 13:12:56.424933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.424946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.429388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.429427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.429441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.433932] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.433970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.433984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.438240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.438279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.438302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.442579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.442619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.442632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.446843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.446885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.446899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.451059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.451097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.451111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.455350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.455390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.455404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.459673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.459713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.459727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.464107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.464147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.464161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.468462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.468502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.468515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.472712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.472771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.472785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.477050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.477091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.477104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.481405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.481445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.481459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.485746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.485786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.485800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.490206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.490247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.490261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.494642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.494682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.494696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.499133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.499182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.449 [2024-07-25 13:12:56.499195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.449 [2024-07-25 13:12:56.503643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.449 [2024-07-25 13:12:56.503697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.503727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.508250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.508290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.508303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.512870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.512908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.512921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.517364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.517403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.517417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.521767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.521824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.521853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.526260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.526300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.526314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.530783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.530860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.530875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.535395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.535435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.535449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.540287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.540335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.540349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.544785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.544825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.544851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.549161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.549201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.549215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.553563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.553637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:04.450 [2024-07-25 13:12:56.553651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.557868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.557908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.557921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.562186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.562240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.562253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.566685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.566727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.566741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.571042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.571097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.571111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.575565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.575620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.575651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.580083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.580138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.580151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.584668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.584748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.584762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.450 [2024-07-25 13:12:56.589032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.450 [2024-07-25 13:12:56.589118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.450 [2024-07-25 13:12:56.589131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.593394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.593434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.593447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.597618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.597673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.597687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.602017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.602056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.602069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.606528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.606569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.606583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.610885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.610923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.610937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.615152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.615193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.615207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.619560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.619600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.619614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.623804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.623857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.623871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.628107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.628146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.628161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.451 [2024-07-25 13:12:56.632562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.451 [2024-07-25 13:12:56.632658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.451 [2024-07-25 13:12:56.632671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.637088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.637161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.637175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.641492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.641534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.641547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.645976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 
00:17:04.713 [2024-07-25 13:12:56.646015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.646030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.650318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.650360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.650374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.654861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.654926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.654940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.659416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.659455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.659469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.663780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.663859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.663873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.668271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.668310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.668324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.672864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.672904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.672917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.677695] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.677767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.677780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.682059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.682111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.682123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.686471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.686540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.686569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.691346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.691385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.691399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.695809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.695858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.695872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.700492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.700532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.700550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.713 [2024-07-25 13:12:56.705414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.713 [2024-07-25 13:12:56.705454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.713 [2024-07-25 13:12:56.705467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 [... further READ completions with data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) status on qid:1 cid:15, differing only in lba and sqhd, omitted here ...] 00:17:04.715 [2024-07-25 13:12:56.832231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.715 [2024-07-25 13:12:56.832269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.715 [2024-07-25 13:12:56.832282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.715 [2024-07-25 13:12:56.836584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.715 [2024-07-25 13:12:56.836636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.715 [2024-07-25 13:12:56.836665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.715 [2024-07-25 13:12:56.841285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.715 [2024-07-25 13:12:56.841339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.715 [2024-07-25 13:12:56.841369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:04.715 [2024-07-25 13:12:56.845845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.715 [2024-07-25 13:12:56.845913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.715 [2024-07-25 13:12:56.845944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:04.715 [2024-07-25 13:12:56.850242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148d200) 00:17:04.715 [2024-07-25 13:12:56.850296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.715 [2024-07-25 13:12:56.850310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.715 00:17:04.715 Latency(us) 00:17:04.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.715 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:04.715 nvme0n1 : 2.00 6874.76 859.34 0.00 0.00 2323.35 1846.92 7596.22 00:17:04.715 =================================================================================================================== 00:17:04.715 Total : 6874.76 859.34 0.00 0.00 2323.35 1846.92 7596.22 00:17:04.715 0 00:17:04.715 13:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:04.715 13:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:04.715 13:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:04.715 13:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:04.715 | .driver_specific 00:17:04.715 | .nvme_error 00:17:04.715 | .status_code 00:17:04.715 | .command_transient_transport_error' 00:17:04.996 13:12:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 444 > 0 )) 00:17:04.996 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79478 00:17:04.996 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79478 ']' 00:17:04.996 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79478 00:17:04.996 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:04.996 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.996 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79478 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:05.263 killing process with pid 79478 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79478' 00:17:05.263 Received shutdown signal, test time was about 2.000000 seconds 00:17:05.263 00:17:05.263 Latency(us) 00:17:05.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.263 =================================================================================================================== 00:17:05.263 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79478 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79478 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79538 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79538 /var/tmp/bperf.sock 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79538 ']' 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
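The xtrace output around this point is one error-injection pass of host/digest.sh. As a reading aid, the pass can be condensed into the sketch below; the command lines are copied from the trace itself, while the consolidation into a single sequence, the use of plain rpc.py in place of the harness helpers (bperf_rpc, rpc_cmd, get_transient_errcount), and the comments are illustrative assumptions rather than part of the original log:

  # start bdevperf on its own RPC socket; -z defers the workload until perform_tests is called
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &

  # set the bdev_nvme options the test uses (arguments copied verbatim from the trace)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the TCP controller with data digest (--ddgst) enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # arm crc32c corruption injection (the trace issues this through rpc_cmd, which is
  # assumed here to target the application's default RPC socket; -i 256 is verbatim)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # run the 2-second workload, then read back the transient transport error counter
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected corruption surfaces in the log as a data digest error / COMMAND TRANSIENT TRANSPORT ERROR pair, and the check passes when the counter read back through jq is greater than zero (444 for the randread pass above).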
00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.263 13:12:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:05.521 [2024-07-25 13:12:57.465412] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:05.521 [2024-07-25 13:12:57.465486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79538 ] 00:17:05.521 [2024-07-25 13:12:57.598537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.780 [2024-07-25 13:12:57.715767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.780 [2024-07-25 13:12:57.772967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:06.347 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.347 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:06.347 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.347 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.605 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:06.605 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.605 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:06.605 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.605 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.605 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.863 nvme0n1 00:17:06.863 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:06.863 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.863 13:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:06.863 13:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.863 13:12:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:06.864 13:12:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:07.122 Running I/O for 2 seconds... 00:17:07.122 [2024-07-25 13:12:59.111887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fef90 00:17:07.122 [2024-07-25 13:12:59.113178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.113236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.130994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fe720 00:17:07.122 [2024-07-25 13:12:59.133279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.133332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.144749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fef90 00:17:07.122 [2024-07-25 13:12:59.147051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.147101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.158650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fe2e8 00:17:07.122 [2024-07-25 13:12:59.161079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.161130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.173777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fda78 00:17:07.122 [2024-07-25 13:12:59.176287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.176368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.188987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fd208 00:17:07.122 [2024-07-25 13:12:59.191333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.191380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.204132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fc998 00:17:07.122 [2024-07-25 13:12:59.206372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6869 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.206421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.218310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fc128 00:17:07.122 [2024-07-25 13:12:59.220551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.220598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.232544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fb8b8 00:17:07.122 [2024-07-25 13:12:59.234755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.234801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.246437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fb048 00:17:07.122 [2024-07-25 13:12:59.248681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.248749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.260549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fa7d8 00:17:07.122 [2024-07-25 13:12:59.262843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.262897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.274636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f9f68 00:17:07.122 [2024-07-25 13:12:59.276797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.276839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.288652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f96f8 00:17:07.122 [2024-07-25 13:12:59.290801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.290871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:07.122 [2024-07-25 13:12:59.303202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f8e88 00:17:07.122 [2024-07-25 13:12:59.305458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:23066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.122 [2024-07-25 13:12:59.305524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.318058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f8618 00:17:07.381 [2024-07-25 13:12:59.320138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.320186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.332138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f7da8 00:17:07.381 [2024-07-25 13:12:59.334451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.334503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.346404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f7538 00:17:07.381 [2024-07-25 13:12:59.348417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.348463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.360100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f6cc8 00:17:07.381 [2024-07-25 13:12:59.362189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.362238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.373816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f6458 00:17:07.381 [2024-07-25 13:12:59.375760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.375805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.387296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f5be8 00:17:07.381 [2024-07-25 13:12:59.389317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.389363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.400825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f5378 00:17:07.381 [2024-07-25 13:12:59.402751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:10231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.402796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.414126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f4b08 00:17:07.381 [2024-07-25 13:12:59.415996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.416041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.427511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f4298 00:17:07.381 [2024-07-25 13:12:59.429537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.429600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.441788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f3a28 00:17:07.381 [2024-07-25 13:12:59.443931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.443989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.457892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f31b8 00:17:07.381 [2024-07-25 13:12:59.460127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.460176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.473274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f2948 00:17:07.381 [2024-07-25 13:12:59.475146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.475192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.486722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f20d8 00:17:07.381 [2024-07-25 13:12:59.488557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.488602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.500525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f1868 00:17:07.381 [2024-07-25 13:12:59.502470] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.502503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.515440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f0ff8 00:17:07.381 [2024-07-25 13:12:59.517529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.517563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.530493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f0788 00:17:07.381 [2024-07-25 13:12:59.532547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.532578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.544400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eff18 00:17:07.381 [2024-07-25 13:12:59.546281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.546312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:07.381 [2024-07-25 13:12:59.558037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ef6a8 00:17:07.381 [2024-07-25 13:12:59.559784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.381 [2024-07-25 13:12:59.559813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.571415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eee38 00:17:07.639 [2024-07-25 13:12:59.573311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.573342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.584887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ee5c8 00:17:07.639 [2024-07-25 13:12:59.586668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.586698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.599395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190edd58 00:17:07.639 [2024-07-25 13:12:59.601352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.601385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.613524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ed4e8 00:17:07.639 [2024-07-25 13:12:59.615253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.615284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.628328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ecc78 00:17:07.639 [2024-07-25 13:12:59.630038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.630069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.641880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ec408 00:17:07.639 [2024-07-25 13:12:59.643476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.643506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.656925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ebb98 00:17:07.639 [2024-07-25 13:12:59.658794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.658823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.670527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eb328 00:17:07.639 [2024-07-25 13:12:59.672118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.672149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.683795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eaab8 00:17:07.639 [2024-07-25 13:12:59.685488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.685519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.697817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ea248 00:17:07.639 [2024-07-25 13:12:59.699429] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.699460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.712075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e99d8 00:17:07.639 [2024-07-25 13:12:59.713641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.713673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.725893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e9168 00:17:07.639 [2024-07-25 13:12:59.727661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.727691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.739959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e88f8 00:17:07.639 [2024-07-25 13:12:59.741501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.741532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.754156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e8088 00:17:07.639 [2024-07-25 13:12:59.755645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.755675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.768140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e7818 00:17:07.639 [2024-07-25 13:12:59.769695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.769725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.783222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e6fa8 00:17:07.639 [2024-07-25 13:12:59.784939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.784975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.797819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e6738 00:17:07.639 [2024-07-25 
13:12:59.799346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.799379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.811536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e5ec8 00:17:07.639 [2024-07-25 13:12:59.813037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.813103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.639 [2024-07-25 13:12:59.824756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e5658 00:17:07.639 [2024-07-25 13:12:59.826257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.639 [2024-07-25 13:12:59.826286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:07.897 [2024-07-25 13:12:59.838327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e4de8 00:17:07.897 [2024-07-25 13:12:59.839738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.897 [2024-07-25 13:12:59.839767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:07.897 [2024-07-25 13:12:59.851978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e4578 00:17:07.897 [2024-07-25 13:12:59.853490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.897 [2024-07-25 13:12:59.853541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:07.897 [2024-07-25 13:12:59.867033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e3d08 00:17:07.897 [2024-07-25 13:12:59.868521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.897 [2024-07-25 13:12:59.868554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:07.897 [2024-07-25 13:12:59.883034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e3498 00:17:07.897 [2024-07-25 13:12:59.884596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.897 [2024-07-25 13:12:59.884629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:07.897 [2024-07-25 13:12:59.899474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e2c28 
00:17:07.897 [2024-07-25 13:12:59.900914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.897 [2024-07-25 13:12:59.900950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:07.897 [2024-07-25 13:12:59.913763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e23b8 00:17:07.897 [2024-07-25 13:12:59.915129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.897 [2024-07-25 13:12:59.915157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:12:59.927591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e1b48 00:17:07.898 [2024-07-25 13:12:59.928873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:12:59.928907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:12:59.940802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e12d8 00:17:07.898 [2024-07-25 13:12:59.942227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:12:59.942258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:12:59.954356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e0a68 00:17:07.898 [2024-07-25 13:12:59.955543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:12:59.955573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:12:59.967520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e01f8 00:17:07.898 [2024-07-25 13:12:59.968698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:12:59.968749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:12:59.980694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190df988 00:17:07.898 [2024-07-25 13:12:59.982025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:12:59.982054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:12:59.994055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfe6650) with pdu=0x2000190df118 00:17:07.898 [2024-07-25 13:12:59.995276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:12:59.995305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:13:00.009267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190de8a8 00:17:07.898 [2024-07-25 13:13:00.010545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:13:00.010575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:13:00.025105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190de038 00:17:07.898 [2024-07-25 13:13:00.026464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:13:00.026493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:13:00.044933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190de038 00:17:07.898 [2024-07-25 13:13:00.047111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:13:00.047142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:13:00.058218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190de8a8 00:17:07.898 [2024-07-25 13:13:00.060428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:13:00.060460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:13:00.072099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190df118 00:17:07.898 [2024-07-25 13:13:00.074418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:07.898 [2024-07-25 13:13:00.074448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:07.898 [2024-07-25 13:13:00.085771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190df988 00:17:08.156 [2024-07-25 13:13:00.088030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.156 [2024-07-25 13:13:00.088059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:08.156 [2024-07-25 13:13:00.099454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xfe6650) with pdu=0x2000190e01f8 00:17:08.156 [2024-07-25 13:13:00.101764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.156 [2024-07-25 13:13:00.101795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:08.156 [2024-07-25 13:13:00.113301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e0a68 00:17:08.156 [2024-07-25 13:13:00.115538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.156 [2024-07-25 13:13:00.115568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:08.156 [2024-07-25 13:13:00.127282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e12d8 00:17:08.156 [2024-07-25 13:13:00.129458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.156 [2024-07-25 13:13:00.129505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:08.156 [2024-07-25 13:13:00.141662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e1b48 00:17:08.157 [2024-07-25 13:13:00.143995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.144025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.156087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e23b8 00:17:08.157 [2024-07-25 13:13:00.158313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.158345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.169970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e2c28 00:17:08.157 [2024-07-25 13:13:00.172061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.172092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.183412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e3498 00:17:08.157 [2024-07-25 13:13:00.185523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.185553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.197575] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e3d08 00:17:08.157 [2024-07-25 13:13:00.199991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.200021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.213007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e4578 00:17:08.157 [2024-07-25 13:13:00.215276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.215306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.226802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e4de8 00:17:08.157 [2024-07-25 13:13:00.228887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.228920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.240549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e5658 00:17:08.157 [2024-07-25 13:13:00.242863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.242919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.255439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e5ec8 00:17:08.157 [2024-07-25 13:13:00.257514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.257561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.269006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e6738 00:17:08.157 [2024-07-25 13:13:00.270896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.270927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.282660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e6fa8 00:17:08.157 [2024-07-25 13:13:00.284825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.284885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 
13:13:00.297215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e7818 00:17:08.157 [2024-07-25 13:13:00.299240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.299286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.310763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e8088 00:17:08.157 [2024-07-25 13:13:00.312716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.312769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.324115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e88f8 00:17:08.157 [2024-07-25 13:13:00.325966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.325995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:08.157 [2024-07-25 13:13:00.337614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e9168 00:17:08.157 [2024-07-25 13:13:00.339700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.157 [2024-07-25 13:13:00.339730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.352940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190e99d8 00:17:08.416 [2024-07-25 13:13:00.355119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.355151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.368391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ea248 00:17:08.416 [2024-07-25 13:13:00.370431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.370464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.383372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eaab8 00:17:08.416 [2024-07-25 13:13:00.385379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.385411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:08.416 
[2024-07-25 13:13:00.397935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eb328 00:17:08.416 [2024-07-25 13:13:00.399811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.399873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.412422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ebb98 00:17:08.416 [2024-07-25 13:13:00.414383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.414416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.427000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ec408 00:17:08.416 [2024-07-25 13:13:00.428836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.428898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.440910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ecc78 00:17:08.416 [2024-07-25 13:13:00.443024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.443056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.455416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ed4e8 00:17:08.416 [2024-07-25 13:13:00.457335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.457365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.470585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190edd58 00:17:08.416 [2024-07-25 13:13:00.472604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.472636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.486159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ee5c8 00:17:08.416 [2024-07-25 13:13:00.488001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.488033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:17:08.416 [2024-07-25 13:13:00.500336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eee38 00:17:08.416 [2024-07-25 13:13:00.502248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.502298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.515797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190ef6a8 00:17:08.416 [2024-07-25 13:13:00.517806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.517864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.531425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190eff18 00:17:08.416 [2024-07-25 13:13:00.533381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.533413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.546009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f0788 00:17:08.416 [2024-07-25 13:13:00.547761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.547791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.560469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f0ff8 00:17:08.416 [2024-07-25 13:13:00.562236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.562268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.574706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f1868 00:17:08.416 [2024-07-25 13:13:00.576454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.576498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:08.416 [2024-07-25 13:13:00.589220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f20d8 00:17:08.416 [2024-07-25 13:13:00.590797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.416 [2024-07-25 13:13:00.590841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0034 
p:0 m:0 dnr:0 00:17:08.417 [2024-07-25 13:13:00.603076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f2948 00:17:08.417 [2024-07-25 13:13:00.604761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.417 [2024-07-25 13:13:00.604790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.617330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f31b8 00:17:08.676 [2024-07-25 13:13:00.618962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.619006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.631365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f3a28 00:17:08.676 [2024-07-25 13:13:00.633038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.633112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.645707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f4298 00:17:08.676 [2024-07-25 13:13:00.647376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.647406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.661759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f4b08 00:17:08.676 [2024-07-25 13:13:00.663415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.663446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.677403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f5378 00:17:08.676 [2024-07-25 13:13:00.679017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.679048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.691523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f5be8 00:17:08.676 [2024-07-25 13:13:00.693137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.693166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 
cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.707149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f6458 00:17:08.676 [2024-07-25 13:13:00.708931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.708966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.722246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f6cc8 00:17:08.676 [2024-07-25 13:13:00.723730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.723774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.737738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f7538 00:17:08.676 [2024-07-25 13:13:00.739441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.739487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.754179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f7da8 00:17:08.676 [2024-07-25 13:13:00.756037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.756099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.770254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f8618 00:17:08.676 [2024-07-25 13:13:00.771696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.771739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.784370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f8e88 00:17:08.676 [2024-07-25 13:13:00.785863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.785936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.798043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f96f8 00:17:08.676 [2024-07-25 13:13:00.799364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.799407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.812077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190f9f68 00:17:08.676 [2024-07-25 13:13:00.813508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.813550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.825580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fa7d8 00:17:08.676 [2024-07-25 13:13:00.826928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.826956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.840062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fb048 00:17:08.676 [2024-07-25 13:13:00.841646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.841692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:08.676 [2024-07-25 13:13:00.855413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fb8b8 00:17:08.676 [2024-07-25 13:13:00.856773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.676 [2024-07-25 13:13:00.856805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:08.935 [2024-07-25 13:13:00.869951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fc128 00:17:08.935 [2024-07-25 13:13:00.871361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.935 [2024-07-25 13:13:00.871422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:08.935 [2024-07-25 13:13:00.885335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fc998 00:17:08.935 [2024-07-25 13:13:00.886735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.935 [2024-07-25 13:13:00.886779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:08.935 [2024-07-25 13:13:00.900649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fd208 00:17:08.936 [2024-07-25 13:13:00.902068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:00.902114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:00.915778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fda78 00:17:08.936 [2024-07-25 13:13:00.917022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:00.917055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:00.929893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fe2e8 00:17:08.936 [2024-07-25 13:13:00.931086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:00.931129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:00.943727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fef90 00:17:08.936 [2024-07-25 13:13:00.944967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:00.944998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:00.963191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fe720 00:17:08.936 [2024-07-25 13:13:00.965511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:00.965554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:00.977188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fef90 00:17:08.936 [2024-07-25 13:13:00.979420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:00.979462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:00.990914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fe2e8 00:17:08.936 [2024-07-25 13:13:00.993135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:00.993181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:01.004594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fda78 00:17:08.936 [2024-07-25 13:13:01.006822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:01.006889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:01.018250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fd208 00:17:08.936 [2024-07-25 13:13:01.020623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:01.020668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:01.034131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fc998 00:17:08.936 [2024-07-25 13:13:01.036773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:01.036805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:01.050058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fc128 00:17:08.936 [2024-07-25 13:13:01.052492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:01.052535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:01.064669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fb8b8 00:17:08.936 [2024-07-25 13:13:01.066859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:01.066909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:01.078323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fb048 00:17:08.936 [2024-07-25 13:13:01.080440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:01.080481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:08.936 [2024-07-25 13:13:01.091440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfe6650) with pdu=0x2000190fa7d8 00:17:08.936 [2024-07-25 13:13:01.093576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.936 [2024-07-25 13:13:01.093620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:08.936 00:17:08.936 Latency(us) 00:17:08.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.936 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.936 nvme0n1 : 2.00 17635.25 68.89 0.00 0.00 7251.87 5391.83 26452.71 00:17:08.936 
=================================================================================================================== 00:17:08.936 Total : 17635.25 68.89 0.00 0.00 7251.87 5391.83 26452.71 00:17:08.936 0 00:17:08.936 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:08.936 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:08.936 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:08.936 | .driver_specific 00:17:08.936 | .nvme_error 00:17:08.936 | .status_code 00:17:08.936 | .command_transient_transport_error' 00:17:08.936 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:09.195 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:17:09.195 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79538 00:17:09.195 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79538 ']' 00:17:09.195 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79538 00:17:09.195 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:09.195 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.195 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79538 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:09.453 killing process with pid 79538 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79538' 00:17:09.453 Received shutdown signal, test time was about 2.000000 seconds 00:17:09.453 00:17:09.453 Latency(us) 00:17:09.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.453 =================================================================================================================== 00:17:09.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79538 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79538 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79597 00:17:09.453 13:13:01 
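The trace above derives the pass/fail signal for this digest-error run by reading per-bdev NVMe error statistics over the bdevperf RPC socket and extracting the transient transport error counter with jq, then requiring it to be non-zero ((( 138 > 0 )) in this run). A minimal stand-alone sketch of that step, assembled only from the rpc.py invocation and jq filter visible in the trace (the socket path, bdev name, and repository path are the ones from this run and are assumptions for any other environment):

  # Pull iostat for nvme0n1 from bdevperf's RPC socket and extract the
  # COMMAND TRANSIENT TRANSPORT ERROR counter collected via --nvme-error-stat.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run passes when at least one such error was recorded.
  (( errcount > 0 ))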
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79597 /var/tmp/bperf.sock 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79597 ']' 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.453 13:13:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:09.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:09.712 Zero copy mechanism will not be used. 00:17:09.712 [2024-07-25 13:13:01.681982] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:09.712 [2024-07-25 13:13:01.682076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79597 ] 00:17:09.712 [2024-07-25 13:13:01.824989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.970 [2024-07-25 13:13:01.932353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.970 [2024-07-25 13:13:01.992117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:10.537 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.537 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:10.537 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:10.537 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:10.796 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:10.796 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.796 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:10.796 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.796 13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.796 
13:13:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:11.419 nvme0n1 00:17:11.419 13:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:11.419 13:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.419 13:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:11.419 13:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.419 13:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:11.419 13:13:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:11.419 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:11.419 Zero copy mechanism will not be used. 00:17:11.419 Running I/O for 2 seconds... 00:17:11.419 [2024-07-25 13:13:03.450648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.450993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.451022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.456412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.456805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.456829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.461769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.462120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.462143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.467014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.467315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.467341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.472430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.472760] nvme_qpair.c: 
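The setup traced over the preceding lines drives this second error case: bdevperf is started on /var/tmp/bperf.sock with a 2-second, queue-depth-16, 128 KiB randwrite workload and -z so the run is triggered later over RPC; per-command NVMe error statistics and unlimited bdev retries are enabled; the controller is attached over TCP with data digest checking (--ddgst); CRC32C corruption is injected through the accel error-injection RPC; and perform_tests starts the run whose digest errors fill the log above. A condensed sketch of that sequence, copied from the commands visible in the trace (paths, addresses, and injection options are specific to this run; rpc_cmd in the trace is not routed through bperf.sock, so the injection presumably reaches a different SPDK application than bdevperf):

  # Start bdevperf idle (-z) on its own RPC socket; the workload is defined
  # here but only runs once perform_tests is called.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  # Record per-command NVMe error statistics and retry failed I/O indefinitely.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP target with data digest enabled so payload CRCs are verified.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject CRC32C corruption; rpc_cmd is the test framework's RPC helper and
  # the options are exactly those shown in the trace.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the configured workload; corrupted digests surface as
  # COMMAND TRANSIENT TRANSPORT ERROR completions that are retried and counted.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests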
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.472786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.477519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.477868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.477921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.482946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.483279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.483306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.488361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.488699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.488783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.493614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.493945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.419 [2024-07-25 13:13:03.493983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.419 [2024-07-25 13:13:03.498770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.419 [2024-07-25 13:13:03.499079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.499104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.503964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.504266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.504293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.509604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 
[2024-07-25 13:13:03.509912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.509934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.515100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.515459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.515486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.520436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.520767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.520794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.525652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.525953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.526004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.530762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.531044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.531068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.536140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.536488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.536515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.541577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.541853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.541889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.547391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.547708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.547735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.553043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.553347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.553375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.558586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.558917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.558942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.564286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.564632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.564658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.569863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.570192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.570219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.575467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.575769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.575818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.581262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.581576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.581603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.586744] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.587090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.587117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.591948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.592286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.592308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.597174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.597463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.597488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.602258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.602537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.602563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.420 [2024-07-25 13:13:03.607352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.420 [2024-07-25 13:13:03.607647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.420 [2024-07-25 13:13:03.607672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.679 [2024-07-25 13:13:03.612707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.679 [2024-07-25 13:13:03.613048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.679 [2024-07-25 13:13:03.613071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.679 [2024-07-25 13:13:03.617940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.679 [2024-07-25 13:13:03.618303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.679 [2024-07-25 13:13:03.618326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:11.679 [2024-07-25 13:13:03.623326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.679 [2024-07-25 13:13:03.623620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.679 [2024-07-25 13:13:03.623645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.679 [2024-07-25 13:13:03.628492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.679 [2024-07-25 13:13:03.628820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.679 [2024-07-25 13:13:03.628855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.679 [2024-07-25 13:13:03.633598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.679 [2024-07-25 13:13:03.633949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.679 [2024-07-25 13:13:03.633969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.679 [2024-07-25 13:13:03.638856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.679 [2024-07-25 13:13:03.639202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.679 [2024-07-25 13:13:03.639224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.644376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.644710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.644763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.650167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.650476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.650503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.655639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.655926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.655962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.661261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.661589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.661614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.666878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.667198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.667224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.672321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.672657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.672681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.677922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.678289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.678311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.683505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.683785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.683809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.688760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.689071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.689110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.694703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.695035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.695062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.700095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.700423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.700449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.705514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.705807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.705842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.711240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.711567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.711607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.717077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.717398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.717426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.722742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.723056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.723081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.728088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.728420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.728447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.733561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.733824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.733856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.738442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.738706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.738731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.743178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.743474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.743495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.747927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.748225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.748249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.752781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.753155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.753176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.757701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.757994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.758017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.762463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.762726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.762750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.767740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.768042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 
[2024-07-25 13:13:03.768066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.773268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.773594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.773619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.778680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.680 [2024-07-25 13:13:03.779010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.680 [2024-07-25 13:13:03.779035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.680 [2024-07-25 13:13:03.784115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.784470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.784492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.789851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.790146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.790170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.795426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.795748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.795773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.800957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.801276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.801304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.806469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.806800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.806825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.812044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.812375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.812402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.817670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.817955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.823183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.823518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.823544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.828626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.828975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.829002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.834267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.834614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.834639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.839651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.839935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.839982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.845184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.845528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.845556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.850560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.850866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.850901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.855485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.855766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.855802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.860800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.861119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.861141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.681 [2024-07-25 13:13:03.866032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.681 [2024-07-25 13:13:03.866328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.681 [2024-07-25 13:13:03.866354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.871660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.871970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.872004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.876912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.877306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.877327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.882408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.882764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.882789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.888027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.888350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.888376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.893660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.893973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.894010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.899243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.899538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.899609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.904544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.904921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.904949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.910211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.910593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.910632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.915885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.916229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.916256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.921613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 
[2024-07-25 13:13:03.921905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.921940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.927132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.927447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.927490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.932667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.933016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.933038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.938434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.941 [2024-07-25 13:13:03.938754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.941 [2024-07-25 13:13:03.938781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.941 [2024-07-25 13:13:03.943929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.944248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.944276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.949534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.949888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.949910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.955134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.955451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.955479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.960963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.961300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.961327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.966449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.966766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.966791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.972077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.972413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.972440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.977746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.978067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.978088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.983529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.983841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.983875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.989188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.989501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.989528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:03.994643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:03.994949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:03.994976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.000254] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.000589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.000628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.005617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.005923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.005945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.010946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.011248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.011270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.016627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.016945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.016972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.022160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.022514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.022540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.027631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.027968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.027995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.033052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.033357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.033378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:11.942 [2024-07-25 13:13:04.038590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.038886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.038920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.044177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.044484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.044511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.049702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.050014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.050040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.055158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.055454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.055488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.060712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.061041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.061068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.066319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.066655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.066680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.071927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.072243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.072270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.077497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.077830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.077863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.083192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.942 [2024-07-25 13:13:04.083488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.942 [2024-07-25 13:13:04.083515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.942 [2024-07-25 13:13:04.088770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.089093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.089119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.943 [2024-07-25 13:13:04.094217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.094522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.094549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.943 [2024-07-25 13:13:04.099822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.100158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.100184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.943 [2024-07-25 13:13:04.105259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.105594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.105651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.943 [2024-07-25 13:13:04.110943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.111237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.111263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.943 [2024-07-25 13:13:04.116435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.116766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.116788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:11.943 [2024-07-25 13:13:04.122038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.122334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.122360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:11.943 [2024-07-25 13:13:04.127720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:11.943 [2024-07-25 13:13:04.128077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:11.943 [2024-07-25 13:13:04.128104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.202 [2024-07-25 13:13:04.133411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.202 [2024-07-25 13:13:04.133737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.202 [2024-07-25 13:13:04.133762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.202 [2024-07-25 13:13:04.139001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.202 [2024-07-25 13:13:04.139350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.139377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.144623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.145003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.145031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.149964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.150305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.150331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.155392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.155726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.155751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.160991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.161314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.166768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.167156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.167183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.172430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.172779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.172806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.177905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.178227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.178253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.183512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.183793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.183817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.189101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.189411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 
[2024-07-25 13:13:04.189452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.194549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.194814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.194845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.199992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.200330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.200357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.205312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.205574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.205594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.210433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.210777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.210798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.216315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.216667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.216693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.221961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.222290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.222317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.227748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.228087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.228113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.233375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.233729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.233772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.239068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.239362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.239388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.244511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.244860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.244886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.250095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.250395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.250421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.255584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.255986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.256024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.261326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.261656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.261682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.266845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.267201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.267231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.272343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.203 [2024-07-25 13:13:04.272677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.203 [2024-07-25 13:13:04.272701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.203 [2024-07-25 13:13:04.278039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.278397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.278424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.283793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.284116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.284141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.289484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.289811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.289859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.295068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.295415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.295442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.300640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.301002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.301028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.306209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.306503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.306530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.311721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.312010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.312035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.317228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.317524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.317583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.322785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.323078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.323119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.328457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.328820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.328856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.333995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.334298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.334325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.339393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.339737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.339761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.344767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 
[2024-07-25 13:13:04.345087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.345127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.350148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.350455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.350480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.355677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.355976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.356000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.361111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.361439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.361465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.366707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.367045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.367069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.372293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.372636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.377931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.378290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.378316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.383648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.383935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.383969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.204 [2024-07-25 13:13:04.389134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.204 [2024-07-25 13:13:04.389451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.204 [2024-07-25 13:13:04.389478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.394656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.394943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.394968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.400074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.400402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.400429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.405758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.406054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.406079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.411161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.411555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.411597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.416591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.416937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.416964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.421978] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.422323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.422349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.427313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.427657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.427681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.432515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.432856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.432881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.437734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.438050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.438074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.442564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.442841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.464 [2024-07-25 13:13:04.442861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.464 [2024-07-25 13:13:04.447685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.464 [2024-07-25 13:13:04.448066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.448088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.453294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.453635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.453656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:12.465 [2024-07-25 13:13:04.458926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.459266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.459287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.464392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.464759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.464785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.469888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.470234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.470260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.475518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.475890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.475944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.481007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.481344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.481370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.486630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.486964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.486990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.492139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.492471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.492497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.497709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.498048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.498074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.503320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.503643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.503668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.508872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.509205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.509248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.514376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.514713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.514737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.519930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.520273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.520299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.525476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.525813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.525847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.530853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.531140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.531180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.536347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.536721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.536772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.541978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.542308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.542335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.547619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.547912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.547964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.553176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.553472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.553500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.558728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.559090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.559117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.564402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.564768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.564795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.570023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.465 [2024-07-25 13:13:04.570346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.465 [2024-07-25 13:13:04.570373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.465 [2024-07-25 13:13:04.575695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.576005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.576040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.581334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.581659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.581684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.586941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.587278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.587305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.592492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.592865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.592892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.597994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.598335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.598361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.603639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.603955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.603990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.609231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.609522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 
[2024-07-25 13:13:04.609565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.614817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.615179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.615205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.620465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.620826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.620863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.626064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.626378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.626406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.631826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.632164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.632191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.637490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.637792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.637816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.643149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.643463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.643537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.466 [2024-07-25 13:13:04.648914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.466 [2024-07-25 13:13:04.649234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.466 [2024-07-25 13:13:04.649261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.654614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.654925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.654947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.659920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.660243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.660264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.665459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.665817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.665852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.671116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.671423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.671465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.676784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.677095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.677123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.682427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.682774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.682799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.688108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.688402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.688429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.693618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.693950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.693977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.699287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.699635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.699660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.704919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.705214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.705240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.710453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.710768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.710793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.716017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.716338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.716365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.721678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.726 [2024-07-25 13:13:04.721966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.726 [2024-07-25 13:13:04.722002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.726 [2024-07-25 13:13:04.727271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.727613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.727640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.733109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.733442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.733469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.738569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.738865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.738912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.743886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.744169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.744209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.749329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.749660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.749684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.754886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.755255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.755282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.760455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.760807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.760843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.766050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 
[2024-07-25 13:13:04.766347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.766373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.771592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.771895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.771943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.777052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.777352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.777379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.782706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.783017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.783042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.788354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.788680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.788705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.794086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.794406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.794433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.799654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.799942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.799986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.805261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.805584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.805624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.810884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.811208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.811236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.816427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.816762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.816789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.821947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.822255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.822282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.827419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.827752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.827776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.832913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.833249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.833276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.838509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.838923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.838960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.844038] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.844363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.844390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.849386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.849701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.849725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.854604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.854866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.854917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.727 [2024-07-25 13:13:04.859737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.727 [2024-07-25 13:13:04.860031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.727 [2024-07-25 13:13:04.860055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.865031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.865357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.865381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.870217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.870480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.870504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.874995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.875297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.875320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:12.728 [2024-07-25 13:13:04.879960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.880260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.880285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.885625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.885935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.885970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.891038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.891373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.891398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.896519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.896898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.896925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.902096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.902445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.902472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.907750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.908042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.908066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.728 [2024-07-25 13:13:04.913241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.728 [2024-07-25 13:13:04.913538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.728 [2024-07-25 13:13:04.913578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.987 [2024-07-25 13:13:04.918782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.987 [2024-07-25 13:13:04.919095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.987 [2024-07-25 13:13:04.919119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.987 [2024-07-25 13:13:04.924208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.987 [2024-07-25 13:13:04.924509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.987 [2024-07-25 13:13:04.924536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.929771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.930052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.930076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.935338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.935685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.935742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.940911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.941262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.941288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.946466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.946765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.946792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.951900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.952250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.952276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.957539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.957864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.957898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.962955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.963278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.963305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.968333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.968665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.968706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.973938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.974286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.974313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.979602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.979904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.979938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.985093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.985424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.985451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.990650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.990959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.990994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:04.996285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:04.996606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:04.996631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.001803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.002097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.002121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.007376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.007734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.007759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.012871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.013224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.013251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.018381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.018727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.018752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.024158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.024471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.024497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.029868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.030208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 
[2024-07-25 13:13:05.030251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.035433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.035768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.035809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.041103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.041430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.041457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.046647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.046961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.046997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.052087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.052408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.052435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.057657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.057960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.057993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.063007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.988 [2024-07-25 13:13:05.063314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.988 [2024-07-25 13:13:05.063339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.988 [2024-07-25 13:13:05.068274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.068553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.068593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.073465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.073746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.073770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.078496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.078778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.078802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.083607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.083908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.083929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.088375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.088692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.088712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.093151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.093413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.093437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.098694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.099057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.099084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.104252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.104571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.104612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.109910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.110252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.110279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.115464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.115801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.115827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.121158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.121493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.121520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.126737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.127094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.127120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.132191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.132483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.132510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.137784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.138135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.138161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.143139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.143433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.143454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.148347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.148643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.148671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.153769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.154104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.154132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.159375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.159702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.159728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.165011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.165319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.165345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:12.989 [2024-07-25 13:13:05.170662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:12.989 [2024-07-25 13:13:05.171005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.989 [2024-07-25 13:13:05.171032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.249 [2024-07-25 13:13:05.176169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.249 [2024-07-25 13:13:05.176462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.176490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.181737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 
[2024-07-25 13:13:05.182102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.182129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.187315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.187610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.187637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.192723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.193055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.193086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.198097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.198394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.198421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.203483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.203822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.203856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.209080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.209373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.209400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.214626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.214994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.215021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.220298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.220630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.220655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.225833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.226158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.226185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.231356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.231677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.231703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.236919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.237218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.237245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.242414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.242728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.242754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.248012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.248342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.248368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.253605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.253900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.253920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.259098] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.259391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.259423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.264579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.264915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.264941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.270132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.270425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.270452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.275751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.276041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.276098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.281368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.281704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.281746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.286984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.287332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.287359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.292390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.292704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.292753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
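These digest failures are the expected outcome of this test case rather than a failure of the run: each one is surfaced to the bdev layer as a transient transport error, and a few lines further down host/digest.sh reads the accumulated command_transient_transport_error counter back via bdev_get_iostat on the bdevperf RPC socket (/var/tmp/bperf.sock) and requires it to be greater than zero. As a rough illustration of that check — the field path is copied from the jq filter used below, the socket path and bdev name are the ones this log shows, and everything else is a simplification — the counter can also be read by speaking JSON-RPC 2.0 directly to the Unix socket:

    # Sketch only: query SPDK's bdev_get_iostat over the bdevperf RPC socket and
    # pull out the transient-transport-error count that the digest test asserts on.
    import json, socket

    def rpc_call(sock_path: str, method: str, params: dict) -> dict:
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                buf += s.recv(4096)
                try:
                    return json.loads(buf.decode())   # full response assembled
                except ValueError:
                    continue                          # keep reading until it parses

    stat = rpc_call("/var/tmp/bperf.sock", "bdev_get_iostat", {"name": "nvme0n1"})
    errs = (stat["result"]["bdevs"][0]["driver_specific"]
                ["nvme_error"]["status_code"]["command_transient_transport_error"])
    assert errs > 0   # same condition as the (( 362 > 0 )) check visible below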
00:17:13.250 [2024-07-25 13:13:05.297710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.298014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.298038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.302779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.303123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.303150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.308405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.250 [2024-07-25 13:13:05.308725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.250 [2024-07-25 13:13:05.308778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.250 [2024-07-25 13:13:05.313945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.314251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.314285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.319667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.319999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.320025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.325342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.325712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.325737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.331086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.331384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.331411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.336488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.336822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.336857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.341961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.342286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.342313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.347498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.347832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.347866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.353086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.353377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.353403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.358651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.358968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.358993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.364422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.364778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.364805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.370076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.370385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.370411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.375444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.375737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.375762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.381174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.381475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.381502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.386772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.387141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.387167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.392400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.392716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.392769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.398045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.398387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.398414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.403737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.404126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.404153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.409365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.409733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.409758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.415166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.415459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.415501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.420808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.421120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.421146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.426413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.426745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.251 [2024-07-25 13:13:05.426769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:13.251 [2024-07-25 13:13:05.432343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.251 [2024-07-25 13:13:05.432706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.252 [2024-07-25 13:13:05.432755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:13.252 [2024-07-25 13:13:05.438133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11aa080) with pdu=0x2000190fef90 00:17:13.510 [2024-07-25 13:13:05.438428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.510 [2024-07-25 13:13:05.438455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:13.510 00:17:13.510 Latency(us) 00:17:13.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.510 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:13.510 nvme0n1 : 2.00 5613.53 701.69 0.00 0.00 2843.90 2115.03 9532.51 00:17:13.510 =================================================================================================================== 00:17:13.510 Total : 5613.53 701.69 0.00 0.00 2843.90 2115.03 9532.51 00:17:13.510 0 00:17:13.510 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:13.510 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:13.510 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 
-- # jq -r '.bdevs[0] 00:17:13.510 | .driver_specific 00:17:13.510 | .nvme_error 00:17:13.510 | .status_code 00:17:13.510 | .command_transient_transport_error' 00:17:13.510 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 362 > 0 )) 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79597 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79597 ']' 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79597 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79597 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:13.768 killing process with pid 79597 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79597' 00:17:13.768 Received shutdown signal, test time was about 2.000000 seconds 00:17:13.768 00:17:13.768 Latency(us) 00:17:13.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.768 =================================================================================================================== 00:17:13.768 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79597 00:17:13.768 13:13:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79597 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79391 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79391 ']' 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79391 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79391 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:14.026 killing process with pid 79391 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79391' 00:17:14.026 13:13:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79391 00:17:14.026 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79391 00:17:14.286 00:17:14.286 real 0m18.447s 00:17:14.286 user 0m35.602s 00:17:14.286 sys 0m4.769s 00:17:14.286 ************************************ 00:17:14.286 END TEST nvmf_digest_error 00:17:14.286 ************************************ 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.286 rmmod nvme_tcp 00:17:14.286 rmmod nvme_fabrics 00:17:14.286 rmmod nvme_keyring 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79391 ']' 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79391 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79391 ']' 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79391 00:17:14.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79391) - No such process 00:17:14.286 Process with pid 79391 is not found 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 79391 is not found' 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:14.286 00:17:14.286 real 
0m38.127s 00:17:14.286 user 1m12.321s 00:17:14.286 sys 0m10.005s 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.286 13:13:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:14.286 ************************************ 00:17:14.286 END TEST nvmf_digest 00:17:14.286 ************************************ 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.545 ************************************ 00:17:14.545 START TEST nvmf_host_multipath 00:17:14.545 ************************************ 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:14.545 * Looking for test storage... 00:17:14.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.545 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:14.546 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:14.547 Cannot find device "nvmf_tgt_br" 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.547 Cannot find device "nvmf_tgt_br2" 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:14.547 Cannot find device "nvmf_tgt_br" 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:14.547 Cannot find device "nvmf_tgt_br2" 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:14.547 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:14.805 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:14.805 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.805 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:14.805 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.805 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:14.805 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.805 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if 
type veth peer name nvmf_init_br 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:14.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:14.806 00:17:14.806 --- 10.0.0.2 ping statistics --- 00:17:14.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.806 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:14.806 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:14.806 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:17:14.806 00:17:14.806 --- 10.0.0.3 ping statistics --- 00:17:14.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.806 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:14.806 00:17:14.806 --- 10.0.0.1 ping statistics --- 00:17:14.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.806 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.806 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:15.064 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=79857 00:17:15.064 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:15.064 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 79857 00:17:15.064 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 79857 ']' 00:17:15.064 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.064 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.065 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:15.065 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.065 13:13:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:15.065 [2024-07-25 13:13:07.054887] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:15.065 [2024-07-25 13:13:07.054978] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.065 [2024-07-25 13:13:07.195189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.322 [2024-07-25 13:13:07.316630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.323 [2024-07-25 13:13:07.316692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.323 [2024-07-25 13:13:07.316704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.323 [2024-07-25 13:13:07.316713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.323 [2024-07-25 13:13:07.316720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.323 [2024-07-25 13:13:07.316911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.323 [2024-07-25 13:13:07.317242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.323 [2024-07-25 13:13:07.375856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=79857 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:16.258 [2024-07-25 13:13:08.411992] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.258 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:16.824 Malloc0 00:17:16.824 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:16.824 13:13:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.391 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.391 [2024-07-25 13:13:09.504341] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.391 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:17.649 [2024-07-25 13:13:09.772498] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=79913 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 79913 /var/tmp/bdevperf.sock 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 79913 ']' 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
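Once the target is up, the setup traced above and continued just below reduces to a short RPC sequence: create the TCP transport, back the subsystem with a 64 MB Malloc bdev using 512-byte blocks, expose nqn.2016-06.io.spdk:cnode1 on ports 4420 and 4421, then have bdevperf attach the same subsystem through both portals, the second time with -x multipath. A condensed sketch follows (rpc.py paths shortened; all arguments are taken verbatim from the trace):

  # target side (default /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side, against the bdevperf RPC socket started with -z -r /var/tmp/bdevperf.sock above
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

Each confirm_io_on_port block that follows then flips the ANA state of the two listeners via set_ANA_state, runs bpftrace's nvmf_path.bt against the target pid to count I/O per @path[address, port] in trace.txt, and checks that the active port reported by nvmf_subsystem_get_listeners matches the port the probes actually saw traffic on.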
00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.649 13:13:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:18.584 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.584 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:18.584 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:18.843 13:13:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:19.409 Nvme0n1 00:17:19.409 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:19.668 Nvme0n1 00:17:19.668 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:19.668 13:13:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:20.603 13:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:20.603 13:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:20.861 13:13:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:21.120 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:21.120 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=79958 00:17:21.120 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:21.120 13:13:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:27.685 Attaching 4 probes... 
00:17:27.685 @path[10.0.0.2, 4421]: 17408 00:17:27.685 @path[10.0.0.2, 4421]: 16360 00:17:27.685 @path[10.0.0.2, 4421]: 15944 00:17:27.685 @path[10.0.0.2, 4421]: 15762 00:17:27.685 @path[10.0.0.2, 4421]: 16257 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 79958 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:27.685 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:27.943 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:27.943 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80076 00:17:27.943 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:27.943 13:13:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:34.504 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:34.504 13:13:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:34.504 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:34.504 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:34.504 Attaching 4 probes... 
00:17:34.504 @path[10.0.0.2, 4420]: 16773 00:17:34.504 @path[10.0.0.2, 4420]: 17299 00:17:34.504 @path[10.0.0.2, 4420]: 16672 00:17:34.504 @path[10.0.0.2, 4420]: 15864 00:17:34.504 @path[10.0.0.2, 4420]: 17176 00:17:34.504 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:34.504 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80076 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:34.505 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:34.764 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:34.764 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80189 00:17:34.764 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:34.764 13:13:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:41.321 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:41.321 13:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:41.321 Attaching 4 probes... 
00:17:41.321 @path[10.0.0.2, 4421]: 13469 00:17:41.321 @path[10.0.0.2, 4421]: 17897 00:17:41.321 @path[10.0.0.2, 4421]: 17894 00:17:41.321 @path[10.0.0.2, 4421]: 17968 00:17:41.321 @path[10.0.0.2, 4421]: 17942 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80189 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:41.321 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:41.579 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:41.579 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80301 00:17:41.579 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:41.579 13:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:48.156 Attaching 4 probes... 
00:17:48.156 00:17:48.156 00:17:48.156 00:17:48.156 00:17:48.156 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80301 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:48.156 13:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:48.156 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:48.415 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:48.415 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80419 00:17:48.415 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:48.415 13:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:54.972 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.973 Attaching 4 probes... 
00:17:54.973 @path[10.0.0.2, 4421]: 17301 00:17:54.973 @path[10.0.0.2, 4421]: 16817 00:17:54.973 @path[10.0.0.2, 4421]: 16184 00:17:54.973 @path[10.0.0.2, 4421]: 16128 00:17:54.973 @path[10.0.0.2, 4421]: 16115 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80419 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:54.973 13:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:55.904 13:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:55.904 13:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80537 00:17:55.904 13:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:55.904 13:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:02.458 13:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:02.458 13:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.458 Attaching 4 probes... 
00:18:02.458 @path[10.0.0.2, 4420]: 15808 00:18:02.458 @path[10.0.0.2, 4420]: 15863 00:18:02.458 @path[10.0.0.2, 4420]: 15903 00:18:02.458 @path[10.0.0.2, 4420]: 15877 00:18:02.458 @path[10.0.0.2, 4420]: 15834 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80537 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.458 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:02.716 [2024-07-25 13:13:54.688703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:02.716 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:02.975 13:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:09.570 13:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:09.570 13:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80712 00:18:09.570 13:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79857 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:09.570 13:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:14.840 13:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:14.840 13:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.421 Attaching 4 probes... 
00:18:15.421 @path[10.0.0.2, 4421]: 15815 00:18:15.421 @path[10.0.0.2, 4421]: 16032 00:18:15.421 @path[10.0.0.2, 4421]: 16049 00:18:15.421 @path[10.0.0.2, 4421]: 16197 00:18:15.421 @path[10.0.0.2, 4421]: 16321 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80712 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 79913 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 79913 ']' 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 79913 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79913 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:15.421 killing process with pid 79913 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79913' 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 79913 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 79913 00:18:15.421 Connection closed with partial response: 00:18:15.421 00:18:15.421 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 79913 00:18:15.421 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:15.421 [2024-07-25 13:13:09.838684] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:15.421 [2024-07-25 13:13:09.838793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79913 ] 00:18:15.421 [2024-07-25 13:13:09.976980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.421 [2024-07-25 13:13:10.105103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.421 [2024-07-25 13:13:10.168219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:15.421 Running I/O for 90 seconds... 00:18:15.421 [2024-07-25 13:13:19.925659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.925752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.925816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.925850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.925877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.925893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.925916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.925931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.925952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.925968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.925999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.926015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.926037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.926052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.926073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.926088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 
sqhd:006f p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.926109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.926124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.926146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.926161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.926182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.926218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.421 [2024-07-25 13:13:19.926243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.421 [2024-07-25 13:13:19.926258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.422 [2024-07-25 13:13:19.926280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.422 [2024-07-25 13:13:19.926295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.422 [2024-07-25 13:13:19.926316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.422 [2024-07-25 13:13:19.926331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.422 [2024-07-25 13:13:19.926353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.422 [2024-07-25 13:13:19.926368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.422 [2024-07-25 13:13:19.926389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.422 [2024-07-25 13:13:19.926404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.422 [2024-07-25 13:13:19.926426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.422 [2024-07-25 13:13:19.926440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.422 [2024-07-25 13:13:19.926463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.422 [2024-07-25 13:13:19.926478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:18:15.422 [2024-07-25 13:13:19.926500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.422 [2024-07-25 13:13:19.926514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
[... repeated nvme_qpair.c *NOTICE* pairs omitted: nvme_io_qpair_print_command entries (READ lba:11200-11600 and WRITE lba:11720-12144 at 13:13:19.926-13:13:19.932, then WRITE lba:63648-64120 and READ lba:63200-63512 at 13:13:26.515-13:13:26.521; all sqid:1 nsid:1 len:8), each followed by a spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...] 
00:18:15.428 [2024-07-25 13:13:26.521512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 
nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.521529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.521552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.521566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.521600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.521615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.521637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.521653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.521675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.521690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.521711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.521726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.521748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.521775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.522173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.522238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.522277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.522313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.522351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 
dnr:0 00:18:15.428 [2024-07-25 13:13:26.522691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.522964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.522979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.428 [2024-07-25 13:13:26.523351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.428 [2024-07-25 13:13:26.523372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.428 [2024-07-25 13:13:26.523387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.523423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.523460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.523496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.523532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.523569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.523605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.523651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.523689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.523737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.523774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.523810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.523862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.429 [2024-07-25 13:13:26.523899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.523942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.523963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.523978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.429 [2024-07-25 13:13:26.524302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.429 [2024-07-25 13:13:26.524881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.429 [2024-07-25 13:13:26.524903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.524918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.524940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.524954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.524976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.524990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:18:15.430 [2024-07-25 13:13:26.525096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.525526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.525540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.536663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.536721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.536782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.536821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.536875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.536914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.536951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.536973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.536988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.537010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.430 [2024-07-25 13:13:26.537025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.537047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.537062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.537084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.537099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.537121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.537136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.537158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.537176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.430 [2024-07-25 13:13:26.537211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.430 [2024-07-25 13:13:26.537245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.431 [2024-07-25 13:13:26.537396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.537877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.537913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.537950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.537972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.537987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.538024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.538061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.538098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.538135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.538174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.538229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.538276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.538321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.538358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.538396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.538418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.538433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.540354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.540387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.540418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.540436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.540458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.540473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.540495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.540511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:18:15.431 [2024-07-25 13:13:26.540533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.540554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.540576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.431 [2024-07-25 13:13:26.540591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.431 [2024-07-25 13:13:26.540613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-25 13:13:26.540628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.540966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.540981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.541830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.541909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.541948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.542053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.542148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.432 [2024-07-25 13:13:26.542234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.542321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.542408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.542489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-25 13:13:26.542572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.542658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.542756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.542869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.542926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.542965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.543016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.543055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.543108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.543146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.432 [2024-07-25 13:13:26.543229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.432 [2024-07-25 13:13:26.543271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.543327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.543366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.543417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.543457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.543518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.543564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.543619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.543655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.543705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.543740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.543786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.543821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.543911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.543947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.544041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.544119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.544942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.544982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:18:15.433 [2024-07-25 13:13:26.545032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.545070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.545150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.545233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.545317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.545375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.545440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-25 13:13:26.545492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.545543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.545593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.545643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.545692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.545741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.545791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.433 [2024-07-25 13:13:26.545859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.433 [2024-07-25 13:13:26.545892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.545914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.545943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.545963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.545993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.434 [2024-07-25 13:13:26.546413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.434 [2024-07-25 13:13:26.546463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.434 [2024-07-25 13:13:26.546513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.434 [2024-07-25 13:13:26.546562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.434 [2024-07-25 13:13:26.546611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.434 [2024-07-25 13:13:26.546660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.434 [2024-07-25 13:13:26.546709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.434 [2024-07-25 13:13:26.546773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.546945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.546980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.434 [2024-07-25 13:13:26.547740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.434 [2024-07-25 13:13:26.547770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.547790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.547819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.547857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.547888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.547909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.547938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.547958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.547988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.548007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.548061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.548119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.548172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.548243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.548292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:26.548341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 
dnr:0 00:18:15.435 [2024-07-25 13:13:26.548371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.548390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.548439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.548488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.548537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.548566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.548586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:26.549250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:26.549287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.435 [2024-07-25 13:13:33.537468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:33.537505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:33.537541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:33.537578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:33.537613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:33.537649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:33.537685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.435 [2024-07-25 13:13:33.537715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.435 [2024-07-25 13:13:33.537731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.537752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.537767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.537789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.537804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.537827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.537857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.537881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.537896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.537919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.537934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.537955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.537970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.537992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.436 [2024-07-25 13:13:33.538706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.538976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.538998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.539013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.539035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.539050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.436 [2024-07-25 13:13:33.539072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.436 [2024-07-25 13:13:33.539087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 
dnr:0 00:18:15.437 [2024-07-25 13:13:33.539229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.539627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.539976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.539998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.540013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.540050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.540095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.540131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.540168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.540204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.437 [2024-07-25 13:13:33.540241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.540277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.540315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.437 [2024-07-25 13:13:33.540352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.437 [2024-07-25 13:13:33.540389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.437 [2024-07-25 13:13:33.540411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.540868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.540905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.540951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.540979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.540994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.541423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.541438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.438 [2024-07-25 13:13:33.542170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:18:15.438 [2024-07-25 13:13:33.542208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.438 [2024-07-25 13:13:33.542561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.438 [2024-07-25 13:13:33.542581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:33.542957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:33.542972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.903820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.903918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.903985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.439 [2024-07-25 13:13:46.904563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.904979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.904992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.905007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.439 [2024-07-25 13:13:46.905036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.439 [2024-07-25 13:13:46.905050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 
13:13:46.905206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.440 [2024-07-25 13:13:46.905628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905803] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.440 [2024-07-25 13:13:46.905909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.440 [2024-07-25 13:13:46.905925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.905938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.905961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.905976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.905991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.441 [2024-07-25 13:13:46.906603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 
13:13:46.906747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.906980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.906996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.907009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.907025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.907040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.907056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.907069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.441 [2024-07-25 13:13:46.907085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.441 [2024-07-25 13:13:46.907099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907358] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.442 [2024-07-25 13:13:46.907564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fe680 is same with the state(5) to be set 00:18:15.442 [2024-07-25 13:13:46.907598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126200 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.907634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126720 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 
[2024-07-25 13:13:46.907684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126728 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.907731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126736 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.907779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126744 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.907826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126752 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.907887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126760 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.907948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.907962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.907972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.907982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126768 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.907995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.908009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.908019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.908030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126776 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.908044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.908058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.908068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.908078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126208 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.908091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.908105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.908115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.908125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126216 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.908139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.908153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.908163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.908173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126224 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.908186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.442 [2024-07-25 13:13:46.908200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.442 [2024-07-25 13:13:46.908210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.442 [2024-07-25 13:13:46.908220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126232 len:8 PRP1 0x0 PRP2 0x0 00:18:15.442 [2024-07-25 13:13:46.908234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.443 [2024-07-25 13:13:46.908257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.443 [2024-07-25 13:13:46.908268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126240 len:8 PRP1 0x0 PRP2 0x0 00:18:15.443 [2024-07-25 13:13:46.908287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.443 [2024-07-25 13:13:46.908312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.443 [2024-07-25 13:13:46.908323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126248 len:8 PRP1 0x0 PRP2 0x0 00:18:15.443 [2024-07-25 13:13:46.908341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.443 [2024-07-25 13:13:46.908365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.443 [2024-07-25 13:13:46.908376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126256 len:8 PRP1 0x0 PRP2 0x0 00:18:15.443 [2024-07-25 13:13:46.908389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.443 [2024-07-25 13:13:46.908412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.443 [2024-07-25 13:13:46.908423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126264 len:8 PRP1 0x0 PRP2 0x0 00:18:15.443 [2024-07-25 13:13:46.908443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908539] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24fe680 was disconnected and freed. reset controller. 
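The block above is the burst you get when an I/O qpair is torn down mid-run: every queued READ/WRITE is printed once by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION before the qpair is freed and the controller reset. When reading a capture like this, a throwaway one-liner is enough to tally how many commands were aborted per opcode (multipath.log is a hypothetical name for the saved output, not a file the test itself produces):

  grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' multipath.log | sort | uniq -c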
00:18:15.443 [2024-07-25 13:13:46.908711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.443 [2024-07-25 13:13:46.908739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.443 [2024-07-25 13:13:46.908785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.443 [2024-07-25 13:13:46.908813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:15.443 [2024-07-25 13:13:46.908856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.443 [2024-07-25 13:13:46.908889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.443 [2024-07-25 13:13:46.908911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2480100 is same with the state(5) to be set 00:18:15.443 [2024-07-25 13:13:46.910101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:15.443 [2024-07-25 13:13:46.910146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480100 (9): Bad file descriptor 00:18:15.443 [2024-07-25 13:13:46.910609] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.443 [2024-07-25 13:13:46.910651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2480100 with addr=10.0.0.2, port=4421 00:18:15.443 [2024-07-25 13:13:46.910670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2480100 is same with the state(5) to be set 00:18:15.443 [2024-07-25 13:13:46.910705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480100 (9): Bad file descriptor 00:18:15.443 [2024-07-25 13:13:46.910737] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:15.443 [2024-07-25 13:13:46.910753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:15.443 [2024-07-25 13:13:46.910768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:15.443 [2024-07-25 13:13:46.910802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
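For reference, errno 111 in the uring connect() failure above is ECONNREFUSED on Linux, i.e. the 10.0.0.2:4421 path is refusing connections at this point; the driver keeps retrying (the failed-state notices above and the new reset attempt just below) until the reset logged at 13:13:56 succeeds. If the kernel UAPI headers are installed on the test host, the mapping can be checked directly:

  grep -w 111 /usr/include/asm-generic/errno.h
  # #define ECONNREFUSED  111  /* Connection refused */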
00:18:15.443 [2024-07-25 13:13:46.910819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:15.443 [2024-07-25 13:13:56.972625] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:15.443 Received shutdown signal, test time was about 55.639089 seconds 00:18:15.443 00:18:15.443 Latency(us) 00:18:15.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.443 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.443 Verification LBA range: start 0x0 length 0x4000 00:18:15.443 Nvme0n1 : 55.64 7038.34 27.49 0.00 0.00 18155.42 793.13 7046430.72 00:18:15.443 =================================================================================================================== 00:18:15.443 Total : 7038.34 27.49 0.00 0.00 18155.42 793.13 7046430.72 00:18:15.443 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.701 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:15.701 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:15.701 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:15.701 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.701 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:15.959 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.959 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:15.959 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.959 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.959 rmmod nvme_tcp 00:18:15.959 rmmod nvme_fabrics 00:18:15.959 rmmod nvme_keyring 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 79857 ']' 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 79857 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 79857 ']' 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 79857 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79857 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:15.960 killing process with pid 
79857 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79857' 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 79857 00:18:15.960 13:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 79857 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.220 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:16.221 00:18:16.221 real 1m1.764s 00:18:16.221 user 2m51.661s 00:18:16.221 sys 0m18.232s 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.221 ************************************ 00:18:16.221 END TEST nvmf_host_multipath 00:18:16.221 ************************************ 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.221 ************************************ 00:18:16.221 START TEST nvmf_timeout 00:18:16.221 ************************************ 00:18:16.221 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:16.504 * Looking for test storage... 
00:18:16.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.504 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:16.505 Cannot find device "nvmf_tgt_br" 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.505 Cannot find device "nvmf_tgt_br2" 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:16.505 Cannot find device "nvmf_tgt_br" 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:16.505 Cannot find device "nvmf_tgt_br2" 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.505 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.764 13:14:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:16.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:16.764 00:18:16.764 --- 10.0.0.2 ping statistics --- 00:18:16.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.764 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:16.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:16.764 00:18:16.764 --- 10.0.0.3 ping statistics --- 00:18:16.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.764 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:16.764 00:18:16.764 --- 10.0.0.1 ping statistics --- 00:18:16.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.764 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81024 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81024 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81024 ']' 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:16.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.764 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.765 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:16.765 13:14:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:16.765 [2024-07-25 13:14:08.845342] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:16.765 [2024-07-25 13:14:08.845431] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.022 [2024-07-25 13:14:08.983793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:17.022 [2024-07-25 13:14:09.093364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:17.022 [2024-07-25 13:14:09.093436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.022 [2024-07-25 13:14:09.093464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.022 [2024-07-25 13:14:09.093473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.022 [2024-07-25 13:14:09.093480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.022 [2024-07-25 13:14:09.093927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.022 [2024-07-25 13:14:09.093937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.022 [2024-07-25 13:14:09.148881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.958 13:14:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:17.958 [2024-07-25 13:14:10.118006] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.958 13:14:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:18.216 Malloc0 00:18:18.474 13:14:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.474 13:14:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.732 13:14:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.990 [2024-07-25 13:14:11.117065] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81073 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81073 /var/tmp/bdevperf.sock 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81073 ']' 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.990 13:14:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:19.248 [2024-07-25 13:14:11.183724] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:19.248 [2024-07-25 13:14:11.183814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81073 ] 00:18:19.248 [2024-07-25 13:14:11.319793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.506 [2024-07-25 13:14:11.449873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.506 [2024-07-25 13:14:11.515647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:20.072 13:14:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.072 13:14:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:20.072 13:14:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:20.330 13:14:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:20.895 NVMe0n1 00:18:20.895 13:14:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81098 00:18:20.895 13:14:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.895 13:14:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:20.895 Running I/O for 10 seconds... 
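Condensed from the shell trace above, the setup this timeout run drives boils down to the following RPC sequence (the $rpc shorthand is mine; the addresses, NQN, malloc geometry and reconnect knobs are exactly the ones that appear in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, subsystem, namespace, listener
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf (started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f)
  # is told over its own RPC socket to attach with a 5 s ctrlr-loss timeout and 2 s reconnect delay
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2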
00:18:21.836 13:14:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.098 [2024-07-25 13:14:14.091564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.098 [2024-07-25 13:14:14.091627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.099 [2024-07-25 13:14:14.091817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.099 
[2024-07-25 13:14:14.091853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c501b0 is same with the state(5) to be set 00:18:22.099 [2024-07-25 13:14:14.091877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.091886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.091894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.091904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.091923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.091931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64800 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.091941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.091958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.091967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64808 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.091976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.091985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.091996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64816 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64824 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64832 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64840 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64848 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64856 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64864 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64872 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:64880 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64888 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64896 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64904 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64912 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.099 [2024-07-25 13:14:14.092460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.099 [2024-07-25 13:14:14.092468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64920 len:8 PRP1 0x0 PRP2 0x0 00:18:22.099 [2024-07-25 13:14:14.092477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.099 [2024-07-25 13:14:14.092487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64928 len:8 PRP1 0x0 PRP2 0x0 
00:18:22.100 [2024-07-25 13:14:14.092512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64936 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64944 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64952 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64960 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64968 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64976 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64984 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64992 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65000 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65008 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65016 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.092968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65024 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.092977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.092986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.092997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65032 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65040 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65048 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65056 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65064 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65072 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:22.100 [2024-07-25 13:14:14.093205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65080 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65088 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65096 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.100 [2024-07-25 13:14:14.093314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.100 [2024-07-25 13:14:14.093322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.100 [2024-07-25 13:14:14.093335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65104 len:8 PRP1 0x0 PRP2 0x0 00:18:22.100 [2024-07-25 13:14:14.093344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65112 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65120 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093425] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65128 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65136 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65144 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65152 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65160 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65168 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65176 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65184 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65192 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65200 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65208 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65216 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 
13:14:14.093874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65224 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65232 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65240 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.093967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.093976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.093984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.093992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65248 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.094001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.094011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.094019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.094026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65256 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.094035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.094045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.094058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.094066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65264 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.094075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.094085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.094093] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.094101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65272 len:8 PRP1 0x0 PRP2 0x0 00:18:22.101 [2024-07-25 13:14:14.094110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.101 [2024-07-25 13:14:14.094120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.101 [2024-07-25 13:14:14.094128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.101 [2024-07-25 13:14:14.094136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65280 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.094145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.094155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.094163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.094171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65288 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.094180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.094189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.094197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.094210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65296 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.094220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.094229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.094237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.094245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65304 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 13:14:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:22.102 [2024-07-25 13:14:14.108015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65312 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 
[2024-07-25 13:14:14.108103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65320 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65328 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65336 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65344 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65352 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65360 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108312] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65368 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65376 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65384 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65392 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65400 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65408 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65416 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65424 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65432 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65440 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65448 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.102 [2024-07-25 13:14:14.108681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.102 [2024-07-25 13:14:14.108689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.102 [2024-07-25 13:14:14.108697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65456 len:8 PRP1 0x0 PRP2 0x0 00:18:22.102 [2024-07-25 13:14:14.108706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 
13:14:14.108731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65464 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.108740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.108765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65472 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.108790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.108816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65480 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.108825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.108868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65488 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.108877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.108902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65496 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.108911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.108936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65504 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.108945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.108971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65512 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.108979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.108989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.108996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65520 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65528 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65544 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65552 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:65560 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65568 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65576 len:8 PRP1 0x0 PRP2 0x0 00:18:22.103 [2024-07-25 13:14:14.109302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.103 [2024-07-25 13:14:14.109316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.103 [2024-07-25 13:14:14.109328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.103 [2024-07-25 13:14:14.109345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65584 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65592 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65600 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65608 len:8 PRP1 0x0 PRP2 0x0 
00:18:22.104 [2024-07-25 13:14:14.109506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65616 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65624 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65632 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65640 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65648 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65656 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65664 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65672 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.109946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.109964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64680 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.109977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.109991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.110002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.110014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64688 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.110027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.110052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.110063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64696 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.110076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.110101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.110112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64704 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.110125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.110150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.110178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.110191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.110216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.110228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64720 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.110241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.104 [2024-07-25 13:14:14.110266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.104 [2024-07-25 13:14:14.110277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64728 len:8 PRP1 0x0 PRP2 0x0 00:18:22.104 [2024-07-25 13:14:14.110290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110363] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c501b0 was disconnected and freed. reset controller. 
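The long run of WRITE and READ notices above is the host side draining its I/O queue pair after the TCP connection to the target dropped: every request still queued is completed manually with ABORTED - SQ DELETION (the 00/08 in each completion is the NVMe generic status code for a command aborted because its submission queue was deleted), the qpair at 0x1c501b0 is freed, and a controller reset is scheduled. The disconnect in this test is self-inflicted; the target-side listener is removed while bdevperf's verify workload is still issuing I/O, roughly as sketched below. The command is restated from the traces elsewhere in this log; invoking it standalone like this is an illustration, the script drives it through its own helpers.

  # Removing the TCP listener severs the data connection; the queued commands are
  # then completed manually with ABORTED - SQ DELETION, as in the notices above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420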
00:18:22.104 [2024-07-25 13:14:14.110491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.104 [2024-07-25 13:14:14.110532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.104 [2024-07-25 13:14:14.110569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.110584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.104 [2024-07-25 13:14:14.110603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.118306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.104 [2024-07-25 13:14:14.118377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.104 [2024-07-25 13:14:14.118395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdfd40 is same with the state(5) to be set 00:18:22.104 [2024-07-25 13:14:14.118758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:22.104 [2024-07-25 13:14:14.118805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdfd40 (9): Bad file descriptor 00:18:22.104 [2024-07-25 13:14:14.118981] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:22.104 [2024-07-25 13:14:14.119013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdfd40 with addr=10.0.0.2, port=4420 00:18:22.104 [2024-07-25 13:14:14.119029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdfd40 is same with the state(5) to be set 00:18:22.105 [2024-07-25 13:14:14.119056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdfd40 (9): Bad file descriptor 00:18:22.105 [2024-07-25 13:14:14.119079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:22.105 [2024-07-25 13:14:14.119093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:22.105 [2024-07-25 13:14:14.119108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:22.105 [2024-07-25 13:14:14.119137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
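From here the reconnect loop takes over: each attempt to reopen the socket to 10.0.0.2 port 4420 fails with errno 111 (connection refused) because the listener is gone, controller re-initialization fails, the controller is marked failed, and bdev_nvme logs "Resetting controller failed." before scheduling the next attempt. While those retries are still running, the test confirms that the controller and its namespace bdev remain registered with bdevperf; a minimal sketch of that check, using the same RPC socket and jq filter seen in the traces below (the expected names NVMe0 and NVMe0n1 come straight from the script's get_controller and get_bdev helpers):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # While reconnect attempts are still being made, both names are still visible...
  [[ "$($rpc_py -s $sock bdev_nvme_get_controllers | jq -r '.[].name')" == NVMe0 ]]
  [[ "$($rpc_py -s $sock bdev_get_bdevs | jq -r '.[].name')" == NVMe0n1 ]]
  # ...and once the configured controller-loss timeout expires, both queries return
  # nothing, which is the '' == '' comparison made later in this log.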
00:18:22.105 [2024-07-25 13:14:14.119153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.005 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:24.005 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:24.005 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:24.005 [2024-07-25 13:14:16.119340] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.005 [2024-07-25 13:14:16.119392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdfd40 with addr=10.0.0.2, port=4420 00:18:24.005 [2024-07-25 13:14:16.119410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdfd40 is same with the state(5) to be set 00:18:24.005 [2024-07-25 13:14:16.119436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdfd40 (9): Bad file descriptor 00:18:24.005 [2024-07-25 13:14:16.119457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:24.005 [2024-07-25 13:14:16.119469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:24.005 [2024-07-25 13:14:16.119480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:24.005 [2024-07-25 13:14:16.119508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:24.005 [2024-07-25 13:14:16.119521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.263 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:24.263 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:24.263 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:24.263 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:24.524 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:24.524 13:14:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:26.436 [2024-07-25 13:14:18.119789] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:26.436 [2024-07-25 13:14:18.119896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdfd40 with addr=10.0.0.2, port=4420 00:18:26.436 [2024-07-25 13:14:18.119915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdfd40 is same with the state(5) to be set 00:18:26.436 [2024-07-25 13:14:18.119943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdfd40 (9): Bad file descriptor 00:18:26.436 [2024-07-25 13:14:18.119963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:26.436 [2024-07-25 13:14:18.119974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:26.436 [2024-07-25 13:14:18.119986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] 
in failed state. 00:18:26.436 [2024-07-25 13:14:18.120014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:26.436 [2024-07-25 13:14:18.120026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:28.337 [2024-07-25 13:14:20.120158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:28.337 [2024-07-25 13:14:20.120230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:28.337 [2024-07-25 13:14:20.120244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:28.337 [2024-07-25 13:14:20.120270] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:28.337 [2024-07-25 13:14:20.120314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:29.268 00:18:29.269 Latency(us) 00:18:29.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.269 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.269 Verification LBA range: start 0x0 length 0x4000 00:18:29.269 NVMe0n1 : 8.19 986.80 3.85 15.63 0.00 127628.80 4051.32 7046430.72 00:18:29.269 =================================================================================================================== 00:18:29.269 Total : 986.80 3.85 15.63 0.00 127628.80 4051.32 7046430.72 00:18:29.269 0 00:18:29.526 13:14:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:29.526 13:14:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:29.526 13:14:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:29.784 13:14:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:29.784 13:14:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:29.784 13:14:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:29.784 13:14:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81098 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81073 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81073 ']' 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81073 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81073 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:30.042 killing process with pid 81073 00:18:30.042 13:14:22 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81073' 00:18:30.042 Received shutdown signal, test time was about 9.298455 seconds 00:18:30.042 00:18:30.042 Latency(us) 00:18:30.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.042 =================================================================================================================== 00:18:30.042 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81073 00:18:30.042 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81073 00:18:30.300 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.558 [2024-07-25 13:14:22.689514] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81221 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81221 /var/tmp/bdevperf.sock 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81221 ']' 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.558 13:14:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:30.815 [2024-07-25 13:14:22.756092] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
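At this point the previous bdevperf instance has been shut down (the zero-filled latency table above follows the shutdown signal from the forced kill), the TCP listener is added back on the target, and a fresh bdevperf is started idle in the background on its own RPC socket; waitforlisten from autotest_common.sh then blocks until that socket accepts RPCs. A condensed restatement of the relaunch, using the exact invocation from the trace (backgrounding with & and capturing the PID are assumptions about how the script wires this up, inferred from the bdevperf_pid assignment shown above):

  # Re-expose the subsystem, then restart bdevperf idle (-z) on a private RPC socket.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock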
00:18:30.815 [2024-07-25 13:14:22.756172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81221 ] 00:18:30.815 [2024-07-25 13:14:22.890382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.815 [2024-07-25 13:14:22.998517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.073 [2024-07-25 13:14:23.053744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:31.639 13:14:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:31.639 13:14:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:31.639 13:14:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:31.898 13:14:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:32.156 NVMe0n1 00:18:32.156 13:14:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81243 00:18:32.156 13:14:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.156 13:14:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:32.156 Running I/O for 10 seconds... 
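Before perform_tests is kicked off, the controller is attached with an explicit reconnect policy: reconnect-delay-sec 1 spaces out the reconnect attempts, fast-io-fail-timeout-sec 2 lets outstanding I/O start failing shortly after the connection is lost, and ctrlr-loss-timeout-sec 5 bounds how long bdev_nvme keeps retrying before it gives up and deletes the controller outright (the same mechanism that left the earlier get_controller and get_bdev checks with empty output once the first controller's loss timeout expired). The RPC sequence, restated from the trace; the trailing & on the perform_tests call is an assumption that mirrors the rpc_pid assignment in the trace, and the option semantics are summarized here rather than quoted from the script:

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Set bdev_nvme options exactly as traced above (-r -1 is taken verbatim).
  $rpc_py bdev_nvme_set_options -r -1
  $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # Start the verify job asynchronously, then remove the listener (next trace line)
  # to exercise the reconnect/timeout path.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &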
00:18:33.089 13:14:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.351 [2024-07-25 13:14:25.482229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482404] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482456] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the 
state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.351 [2024-07-25 13:14:25.482737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.482996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 
13:14:25.483020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same 
with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1eca0 is same with the state(5) to be set 00:18:33.352 [2024-07-25 13:14:25.483286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 
13:14:25.483513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.352 [2024-07-25 13:14:25.483578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.352 [2024-07-25 13:14:25.483587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.483986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.483998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:33.353 [2024-07-25 13:14:25.484426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.353 [2024-07-25 13:14:25.484469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.353 [2024-07-25 13:14:25.484481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484890] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.484988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.484998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.354 [2024-07-25 13:14:25.485319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.354 [2024-07-25 13:14:25.485331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:33.355 [2024-07-25 13:14:25.485353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.355 [2024-07-25 13:14:25.485804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.485988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.485998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.486009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.486019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.486031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.486040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.486052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.355 [2024-07-25 13:14:25.486062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.355 [2024-07-25 13:14:25.486073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.356 [2024-07-25 13:14:25.486083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.356 [2024-07-25 13:14:25.486104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.356 [2024-07-25 13:14:25.486126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.356 [2024-07-25 13:14:25.486147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:33.356 [2024-07-25 13:14:25.486168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c701b0 is same with the state(5) to be set 00:18:33.356 [2024-07-25 13:14:25.486192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:33.356 [2024-07-25 13:14:25.486200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:33.356 [2024-07-25 13:14:25.486209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:18:33.356 [2024-07-25 13:14:25.486223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486277] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c701b0 was disconnected and freed. reset controller. 
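Every completion in the dump above carries the same status pair, "(00/08)": status code type 0x00 (generic command status) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion, i.e. all reads and writes still outstanding on qpair 1 are failed back when the submission queue is torn down. A minimal decoding sketch (Python, not part of the test suite; only the codes that actually appear in this log are listed):

```python
# Hedged decoder sketch for the "(SCT/SC)" pair printed in the completions above,
# e.g. "ABORTED - SQ DELETION (00/08)". Only the generic (SCT 0x00) codes seen in
# this log are included here.
GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

sct, sc = 0x00, 0x08  # taken from "(00/08)" in the log lines above
print(f"SCT {sct:#04x} / SC {sc:#04x}: {GENERIC_STATUS[sc]}")
```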
00:18:33.356 [2024-07-25 13:14:25.486359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.356 [2024-07-25 13:14:25.486387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.356 [2024-07-25 13:14:25.486415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.356 [2024-07-25 13:14:25.486435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.356 [2024-07-25 13:14:25.486455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.356 [2024-07-25 13:14:25.486464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set 00:18:33.356 [2024-07-25 13:14:25.486694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:33.356 [2024-07-25 13:14:25.486728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor 00:18:33.356 [2024-07-25 13:14:25.486825] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:33.356 [2024-07-25 13:14:25.486859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffd40 with addr=10.0.0.2, port=4420 00:18:33.356 [2024-07-25 13:14:25.486870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set 00:18:33.356 [2024-07-25 13:14:25.486890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor 00:18:33.356 [2024-07-25 13:14:25.486906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:33.356 [2024-07-25 13:14:25.486916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:33.356 [2024-07-25 13:14:25.486928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.356 [2024-07-25 13:14:25.486948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
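After the qpair is freed, the host retries the connection and each attempt fails with "connect() failed, errno = 111", presumably because the target's TCP listener has been taken down at this point in the test (it is re-added by host/timeout.sh@91 below). A small sketch (not part of the test suite) that translates that errno value on Linux:

```python
# Minimal sketch: translate the errno reported by uring_sock_create
# ("connect() failed, errno = 111") into its symbolic name and message.
import errno
import os

code = 111  # value copied from the log above
print(errno.errorcode[code], "-", os.strerror(code))
# On Linux this prints: ECONNREFUSED - Connection refused
```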
00:18:33.356 13:14:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:18:33.356 [2024-07-25 13:14:25.501651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:34.730 [2024-07-25 13:14:26.501858] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:34.730 [2024-07-25 13:14:26.501980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffd40 with addr=10.0.0.2, port=4420
00:18:34.730 [2024-07-25 13:14:26.501998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set
00:18:34.730 [2024-07-25 13:14:26.502027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor
00:18:34.730 [2024-07-25 13:14:26.502048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:18:34.730 [2024-07-25 13:14:26.502059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:18:34.730 [2024-07-25 13:14:26.502071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:34.730 [2024-07-25 13:14:26.502100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:34.730 [2024-07-25 13:14:26.502113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:34.730 13:14:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-07-25 13:14:26.717982] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:34.730 13:14:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81243
00:18:35.664 [2024-07-25 13:14:27.521627] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:42.223
00:18:42.223                                                                  Latency(us)
00:18:42.223 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:42.223 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:42.223      Verification LBA range: start 0x0 length 0x4000
00:18:42.223      NVMe0n1                :      10.01    5819.51      22.73       0.00       0.00   21957.24    2189.50 3035150.89
00:18:42.223 ===================================================================================================================
00:18:42.223 Total                       :              5819.51      22.73       0.00       0.00   21957.24    2189.50 3035150.89
00:18:42.223 0
00:18:42.223 13:14:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81349
00:18:42.223 13:14:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:42.223 13:14:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:18:42.481 Running I/O for 10 seconds...
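As a quick sanity check on the bdevperf summary above, the MiB/s column is consistent with the reported IOPS at the job's 4096-byte IO size; a short sketch with the values copied from the log:

```python
# Sanity check of the bdevperf summary above; all values are copied from the log.
iops = 5819.51          # NVMe0n1 IOPS over the 10.01 s verify run
io_size_bytes = 4096    # "IO size: 4096" from the job description
mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")  # ~22.73 MiB/s, matching the MiB/s column
```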
00:18:43.417 13:14:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.417 [2024-07-25 13:14:35.588012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.417 [2024-07-25 13:14:35.588075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.417 [2024-07-25 13:14:35.588101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.417 [2024-07-25 13:14:35.588122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:43.417 [2024-07-25 13:14:35.588143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set 00:18:43.417 [2024-07-25 13:14:35.588411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.417 [2024-07-25 13:14:35.588429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 
13:14:35.588537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.417 [2024-07-25 13:14:35.588918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.417 [2024-07-25 13:14:35.588930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.588940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.588952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.588961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.588973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.588983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.588994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 
13:14:35.589434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.418 [2024-07-25 13:14:35.589766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.418 [2024-07-25 13:14:35.589776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.589980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.589991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:43.419 [2024-07-25 13:14:35.590099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590537] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.419 [2024-07-25 13:14:35.590621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.419 [2024-07-25 13:14:35.590631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60520 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.590859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.590882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.590903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.590925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.590947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.590968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.590980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:43.420 [2024-07-25 13:14:35.590990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:43.420 [2024-07-25 13:14:35.591188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:43.420 [2024-07-25 13:14:35.591217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6f790 is same with the state(5) to be set 00:18:43.420 [2024-07-25 13:14:35.591241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:43.420 [2024-07-25 13:14:35.591249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:43.420 [2024-07-25 13:14:35.591258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60568 len:8 PRP1 0x0 PRP2 0x0 00:18:43.420 [2024-07-25 13:14:35.591267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:43.420 [2024-07-25 13:14:35.591321] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c6f790 was disconnected and freed. reset controller. 00:18:43.420 [2024-07-25 13:14:35.591543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.420 [2024-07-25 13:14:35.591578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor 00:18:43.420 [2024-07-25 13:14:35.591673] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.420 [2024-07-25 13:14:35.591694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffd40 with addr=10.0.0.2, port=4420 00:18:43.420 [2024-07-25 13:14:35.591706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set 00:18:43.420 [2024-07-25 13:14:35.591724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor 00:18:43.420 [2024-07-25 13:14:35.591741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.420 [2024-07-25 13:14:35.591751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:43.420 [2024-07-25 13:14:35.591762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:43.420 [2024-07-25 13:14:35.591781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
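The wall of *NOTICE* messages above is the driver draining I/O qpair 1: once the submission queue is torn down, every still-queued WRITE and READ is completed manually with ABORTED - SQ DELETION, the qpair (0x1c6f790) is freed, and bdev_nvme moves on to resetting the controller. When working from a saved copy of this console output, those aborts are easiest to tally with a one-liner; the file name below is only a placeholder for wherever the capture is stored:

  # Count every ABORTED - SQ DELETION completion in a saved copy of this console log.
  # grep -o counts occurrences rather than lines, which matters because many entries
  # share a physical line in this capture.
  grep -o 'ABORTED - SQ DELETION' nvmf_timeout_console.log | wc -l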
00:18:43.420 [2024-07-25 13:14:35.591793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:43.679 13:14:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:44.614 [2024-07-25 13:14:36.604387] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:44.614 [2024-07-25 13:14:36.604467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffd40 with addr=10.0.0.2, port=4420 00:18:44.614 [2024-07-25 13:14:36.604485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set 00:18:44.614 [2024-07-25 13:14:36.604515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor 00:18:44.614 [2024-07-25 13:14:36.604535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:44.614 [2024-07-25 13:14:36.604546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:44.614 [2024-07-25 13:14:36.604558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:44.614 [2024-07-25 13:14:36.604585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:44.614 [2024-07-25 13:14:36.604598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:45.549 [2024-07-25 13:14:37.604728] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.549 [2024-07-25 13:14:37.604801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffd40 with addr=10.0.0.2, port=4420 00:18:45.549 [2024-07-25 13:14:37.604819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set 00:18:45.549 [2024-07-25 13:14:37.604857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor 00:18:45.549 [2024-07-25 13:14:37.604879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:45.549 [2024-07-25 13:14:37.604890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:45.549 [2024-07-25 13:14:37.604902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:45.549 [2024-07-25 13:14:37.604941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
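Each failed attempt in this stretch follows the same pattern: uring_sock_create reports connect() failed, errno = 111 for 10.0.0.2 port 4420, the qpair cannot be brought up, nvme_ctrlr_process_init bails out, and bdev_nvme tries again roughly one second later. Errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections on that address and port at this point in the test; a quick way to confirm the mapping:

  # errno 111 -> symbolic name and message (Linux)
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused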
00:18:45.549 [2024-07-25 13:14:37.604955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.484 [2024-07-25 13:14:38.605419] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.484 [2024-07-25 13:14:38.605499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffd40 with addr=10.0.0.2, port=4420 00:18:46.484 [2024-07-25 13:14:38.605517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffd40 is same with the state(5) to be set 00:18:46.484 [2024-07-25 13:14:38.605772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffd40 (9): Bad file descriptor 00:18:46.484 [2024-07-25 13:14:38.606033] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.484 [2024-07-25 13:14:38.606056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:46.484 [2024-07-25 13:14:38.606068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.484 13:14:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.484 [2024-07-25 13:14:38.609897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.484 [2024-07-25 13:14:38.609936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.742 [2024-07-25 13:14:38.869668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.742 13:14:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81349 00:18:47.677 [2024-07-25 13:14:39.649135] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
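The recovery above is driven by the test script rather than the driver: host/timeout.sh@102 re-adds the TCP listener for nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 port 4420, the target logs that it is listening again, and the very next reset attempt connects and completes (Resetting controller successful). A minimal sketch of that listener toggle, assuming a running SPDK target that already serves this subsystem and that rpc.py is invoked from the SPDK repository root (the sleep mirrors the sleep 3 at host/timeout.sh@101):

  # Drop the listener: host-side reconnects start failing with ECONNREFUSED.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Restore the listener: the next controller reset reconnects and I/O resumes.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420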
00:18:53.013 00:18:53.013 Latency(us) 00:18:53.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.013 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:53.013 Verification LBA range: start 0x0 length 0x4000 00:18:53.013 NVMe0n1 : 10.01 5364.84 20.96 3744.83 0.00 14021.00 685.15 3019898.88 00:18:53.013 =================================================================================================================== 00:18:53.013 Total : 5364.84 20.96 3744.83 0.00 14021.00 0.00 3019898.88 00:18:53.013 0 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81221 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81221 ']' 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81221 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81221 00:18:53.013 killing process with pid 81221 00:18:53.013 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.013 00:18:53.013 Latency(us) 00:18:53.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.013 =================================================================================================================== 00:18:53.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81221' 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81221 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81221 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81462 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81462 /var/tmp/bdevperf.sock 00:18:53.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81462 ']' 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.013 13:14:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:53.014 [2024-07-25 13:14:44.836967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:53.014 [2024-07-25 13:14:44.837063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81462 ] 00:18:53.014 [2024-07-25 13:14:44.974446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.014 [2024-07-25 13:14:45.083280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.014 [2024-07-25 13:14:45.135656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:53.951 13:14:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.951 13:14:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:53.951 13:14:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81474 00:18:53.951 13:14:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81462 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:53.951 13:14:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:53.951 13:14:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:54.210 NVMe0n1 00:18:54.210 13:14:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81521 00:18:54.210 13:14:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.210 13:14:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:54.469 Running I/O for 10 seconds... 
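The summary printed for the first bdevperf run a few lines above (verify workload, queue depth 128, 4096-byte I/O on core mask 0x4) reads: 5364.84 IOPS and 20.96 MiB/s over a 10.01 s runtime, 3744.83 failed I/O per second, and an average latency of 14021.00 us (min 685.15 us, max about 3.02 s); the failures and the long latency tail line up with the aborts logged while the listener was down. The MiB/s column is consistent with the reported IOPS at that block size, which a one-line check confirms using only numbers taken from the table:

  # MiB/s implied by 5364.84 IOPS at a 4096-byte I/O size
  awk 'BEGIN { printf "%.2f MiB/s\n", 5364.84 * 4096 / (1024 * 1024) }'
  # prints 20.96, matching the NVMe0n1 row

The second bdevperf instance (bdevperf_pid=81462) is attached with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2, so, as the option names suggest, its reconnect attempts should come about every 2 seconds and the controller should be given up on after roughly 5 seconds without a successful reconnect.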
00:18:55.407 13:14:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.407 [2024-07-25 13:14:47.595001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:55.407 [2024-07-25 13:14:47.595257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.407 [2024-07-25 13:14:47.595289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.407 [2024-07-25 13:14:47.595298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 
13:14:47.595486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.595981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.595990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.596010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.596030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.596051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.596071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.596091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.596111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.408 [2024-07-25 13:14:47.596133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.408 [2024-07-25 13:14:47.596144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.409 [2024-07-25 13:14:47.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.409 [2024-07-25 13:14:47.596165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.409 [2024-07-25 13:14:47.596174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.409 [2024-07-25 13:14:47.596185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.409 [2024-07-25 13:14:47.596195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.409 [2024-07-25 13:14:47.596206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.409 [2024-07-25 13:14:47.596215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.409 [2024-07-25 13:14:47.596226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.409 [2024-07-25 13:14:47.596235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.409 [2024-07-25 13:14:47.596247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.409 [2024-07-25 13:14:47.596256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:55.670 [2024-07-25 13:14:47.596348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596564] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.670 [2024-07-25 13:14:47.596686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.670 [2024-07-25 13:14:47.596696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.596987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.596999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:37400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103296 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.671 [2024-07-25 13:14:47.597519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.671 [2024-07-25 13:14:47.597528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 
[2024-07-25 13:14:47.597629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:55.672 [2024-07-25 13:14:47.597756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f66a0 is same with the state(5) to be set 00:18:55.672 [2024-07-25 13:14:47.597778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:55.672 [2024-07-25 13:14:47.597787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:55.672 [2024-07-25 13:14:47.597795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73368 len:8 PRP1 0x0 PRP2 0x0 00:18:55.672 [2024-07-25 13:14:47.597809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.672 [2024-07-25 13:14:47.597873] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24f66a0 was disconnected and freed. reset controller. 
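The block above is the per-command teardown of qpair 0x24f66a0: each READ still queued on submission queue 1 is completed with ABORTED - SQ DELETION (00/08) status before the qpair is disconnected and freed and the controller reset starts. When triaging a saved copy of this output, a rough tally of those aborts can be pulled with plain grep; the file name build.log below is only a placeholder for wherever the log was saved, not a path used by the harness.

    # count the per-command aborts printed during the SQ deletion above
    grep -c 'ABORTED - SQ DELETION' build.log
    # list which qpairs were disconnected and freed during the run
    grep 'was disconnected and freed' build.log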
00:18:55.672 [2024-07-25 13:14:47.598136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:55.672 [2024-07-25 13:14:47.598212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a5c00 (9): Bad file descriptor 00:18:55.672 [2024-07-25 13:14:47.598342] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.672 [2024-07-25 13:14:47.598371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a5c00 with addr=10.0.0.2, port=4420 00:18:55.672 [2024-07-25 13:14:47.598383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5c00 is same with the state(5) to be set 00:18:55.672 [2024-07-25 13:14:47.598402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a5c00 (9): Bad file descriptor 00:18:55.672 [2024-07-25 13:14:47.598419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.672 [2024-07-25 13:14:47.598428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:55.672 [2024-07-25 13:14:47.598439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:55.672 [2024-07-25 13:14:47.598459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:55.672 [2024-07-25 13:14:47.598470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:55.672 13:14:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81521 00:18:57.576 [2024-07-25 13:14:49.598692] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.576 [2024-07-25 13:14:49.598749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a5c00 with addr=10.0.0.2, port=4420 00:18:57.576 [2024-07-25 13:14:49.598766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5c00 is same with the state(5) to be set 00:18:57.576 [2024-07-25 13:14:49.598791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a5c00 (9): Bad file descriptor 00:18:57.576 [2024-07-25 13:14:49.598810] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.576 [2024-07-25 13:14:49.598820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:57.576 [2024-07-25 13:14:49.598844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:57.576 [2024-07-25 13:14:49.598873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
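In the reset path above, uring_sock_create's connect() to 10.0.0.2 port 4420 fails with errno 111 (ECONNREFUSED, i.e. nothing was accepting connections on that address at that moment), so controller reinitialization fails and the reset is retried. The retries land at 13:14:47, 13:14:49, 13:14:51 and 13:14:53, roughly 2 s apart, which matches the spacing of the "reconnect delay bdev controller NVMe0" probes in the trace.txt dump further down (for example 3259.121383 - 1258.826262 ≈ 2000.3 ms).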
00:18:57.576 [2024-07-25 13:14:49.598885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:59.483 [2024-07-25 13:14:51.599154] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.483 [2024-07-25 13:14:51.599230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a5c00 with addr=10.0.0.2, port=4420 00:18:59.483 [2024-07-25 13:14:51.599247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5c00 is same with the state(5) to be set 00:18:59.483 [2024-07-25 13:14:51.599274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a5c00 (9): Bad file descriptor 00:18:59.483 [2024-07-25 13:14:51.599294] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.483 [2024-07-25 13:14:51.599304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:59.483 [2024-07-25 13:14:51.599315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:59.483 [2024-07-25 13:14:51.599343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:59.483 [2024-07-25 13:14:51.599355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.017 [2024-07-25 13:14:53.599483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.017 [2024-07-25 13:14:53.599552] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.017 [2024-07-25 13:14:53.599566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:02.017 [2024-07-25 13:14:53.599577] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:02.017 [2024-07-25 13:14:53.599605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:02.633 00:19:02.633 Latency(us) 00:19:02.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.633 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:02.633 NVMe0n1 : 8.11 2036.20 7.95 15.79 0.00 62328.38 8162.21 7015926.69 00:19:02.633 =================================================================================================================== 00:19:02.633 Total : 2036.20 7.95 15.79 0.00 62328.38 8162.21 7015926.69 00:19:02.633 0 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.633 Attaching 5 probes... 
00:19:02.633 1258.687855: reset bdev controller NVMe0 00:19:02.633 1258.826262: reconnect bdev controller NVMe0 00:19:02.633 3259.121383: reconnect delay bdev controller NVMe0 00:19:02.633 3259.141566: reconnect bdev controller NVMe0 00:19:02.633 5259.572739: reconnect delay bdev controller NVMe0 00:19:02.633 5259.599084: reconnect bdev controller NVMe0 00:19:02.633 7260.014270: reconnect delay bdev controller NVMe0 00:19:02.633 7260.041924: reconnect bdev controller NVMe0 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81474 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81462 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81462 ']' 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81462 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81462 00:19:02.633 killing process with pid 81462 00:19:02.633 Received shutdown signal, test time was about 8.167232 seconds 00:19:02.633 00:19:02.633 Latency(us) 00:19:02.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.633 =================================================================================================================== 00:19:02.633 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81462' 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81462 00:19:02.633 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81462 00:19:02.892 13:14:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.151 rmmod 
nvme_tcp 00:19:03.151 rmmod nvme_fabrics 00:19:03.151 rmmod nvme_keyring 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81024 ']' 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81024 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81024 ']' 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81024 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81024 00:19:03.151 killing process with pid 81024 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81024' 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81024 00:19:03.151 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81024 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:03.410 00:19:03.410 real 0m47.250s 00:19:03.410 user 2m18.712s 00:19:03.410 sys 0m5.796s 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.410 13:14:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:03.410 ************************************ 00:19:03.410 END TEST nvmf_timeout 00:19:03.410 ************************************ 00:19:03.670 13:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:03.670 13:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:03.670 00:19:03.670 real 5m0.666s 00:19:03.670 user 13m12.766s 00:19:03.670 sys 1m6.877s 00:19:03.670 13:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.670 13:14:55 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.670 ************************************ 00:19:03.670 END TEST nvmf_host 00:19:03.670 ************************************ 00:19:03.670 ************************************ 00:19:03.670 END TEST nvmf_tcp 00:19:03.670 ************************************ 00:19:03.670 00:19:03.670 real 11m56.173s 00:19:03.670 user 29m6.254s 00:19:03.670 sys 3m0.121s 00:19:03.670 13:14:55 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:03.670 13:14:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.670 13:14:55 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:19:03.670 13:14:55 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:03.670 13:14:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:03.670 13:14:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.670 13:14:55 -- common/autotest_common.sh@10 -- # set +x 00:19:03.670 ************************************ 00:19:03.670 START TEST nvmf_dif 00:19:03.670 ************************************ 00:19:03.670 13:14:55 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:03.670 * Looking for test storage... 00:19:03.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:03.670 13:14:55 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0dc0e430-4c84-41eb-b0bf-db443f090fb1 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.670 13:14:55 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:03.670 13:14:55 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.670 13:14:55 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.670 13:14:55 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.670 13:14:55 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.670 13:14:55 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.671 13:14:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.671 13:14:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:03.671 13:14:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.671 13:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:03.671 13:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:03.671 13:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:03.671 13:14:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:03.671 13:14:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.671 13:14:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:03.671 13:14:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
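nvmftestinit/nvmf_veth_init in the records that follow build the virtual test network the dif tests run over; condensed to the essential ip/iptables commands the harness prints (all interface names and addresses are the harness's own, nothing new is introduced here):

    # condensed sketch of the topology set up by nvmf_veth_init below
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

so the initiator side at 10.0.0.1 reaches the target addresses 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace over the nvmf_br bridge, which the three pings at the end of the setup confirm.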
00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:03.671 Cannot find device "nvmf_tgt_br" 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:03.671 13:14:55 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.930 Cannot find device "nvmf_tgt_br2" 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:03.930 Cannot find device "nvmf_tgt_br" 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:03.930 Cannot find device "nvmf_tgt_br2" 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:19:03.930 13:14:55 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:03.930 13:14:56 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:04.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:19:04.190 00:19:04.190 --- 10.0.0.2 ping statistics --- 00:19:04.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.190 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:04.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:04.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:04.190 00:19:04.190 --- 10.0.0.3 ping statistics --- 00:19:04.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.190 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:04.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:04.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:04.190 00:19:04.190 --- 10.0.0.1 ping statistics --- 00:19:04.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.190 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:04.190 13:14:56 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:04.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:04.449 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:04.449 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:04.449 13:14:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:04.449 13:14:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=81953 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:04.449 13:14:56 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 81953 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 81953 ']' 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.449 13:14:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:04.708 [2024-07-25 13:14:56.641056] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:04.708 [2024-07-25 13:14:56.641818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.708 [2024-07-25 13:14:56.778674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.708 [2024-07-25 13:14:56.892202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:04.708 [2024-07-25 13:14:56.892259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.708 [2024-07-25 13:14:56.892272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.708 [2024-07-25 13:14:56.892281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.708 [2024-07-25 13:14:56.892288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.708 [2024-07-25 13:14:56.892322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.967 [2024-07-25 13:14:56.944850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:19:05.533 13:14:57 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.533 13:14:57 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.533 13:14:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:05.533 13:14:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.533 [2024-07-25 13:14:57.692434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.533 13:14:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:05.533 13:14:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.533 ************************************ 00:19:05.533 START TEST fio_dif_1_default 00:19:05.533 ************************************ 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.533 bdev_null0 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:05.533 
13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.533 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.792 [2024-07-25 13:14:57.740525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.792 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:05.793 { 00:19:05.793 "params": { 00:19:05.793 "name": "Nvme$subsystem", 00:19:05.793 "trtype": "$TEST_TRANSPORT", 00:19:05.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.793 "adrfam": "ipv4", 00:19:05.793 "trsvcid": "$NVMF_PORT", 00:19:05.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.793 "hdgst": ${hdgst:-false}, 00:19:05.793 "ddgst": ${ddgst:-false} 00:19:05.793 }, 00:19:05.793 "method": "bdev_nvme_attach_controller" 00:19:05.793 } 00:19:05.793 EOF 00:19:05.793 )") 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:05.793 "params": { 00:19:05.793 "name": "Nvme0", 00:19:05.793 "trtype": "tcp", 00:19:05.793 "traddr": "10.0.0.2", 00:19:05.793 "adrfam": "ipv4", 00:19:05.793 "trsvcid": "4420", 00:19:05.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:05.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:05.793 "hdgst": false, 00:19:05.793 "ddgst": false 00:19:05.793 }, 00:19:05.793 "method": "bdev_nvme_attach_controller" 00:19:05.793 }' 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:05.793 13:14:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.793 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:05.793 fio-3.35 00:19:05.793 Starting 1 thread 00:19:18.014 00:19:18.014 filename0: (groupid=0, jobs=1): err= 0: pid=82025: Thu Jul 25 13:15:08 2024 00:19:18.014 read: IOPS=8777, BW=34.3MiB/s (36.0MB/s)(343MiB/10001msec) 00:19:18.014 slat (usec): min=6, max=100, avg= 8.55, stdev= 2.63 00:19:18.014 clat (usec): min=343, max=4103, avg=430.64, stdev=36.17 00:19:18.014 lat (usec): min=349, max=4145, avg=439.18, stdev=36.72 00:19:18.014 clat percentiles (usec): 00:19:18.014 | 1.00th=[ 367], 5.00th=[ 388], 10.00th=[ 404], 
20.00th=[ 412], 00:19:18.014 | 30.00th=[ 420], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 437], 00:19:18.014 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 474], 00:19:18.014 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 562], 99.95th=[ 578], 00:19:18.014 | 99.99th=[ 791] 00:19:18.014 bw ( KiB/s): min=33856, max=36544, per=100.00%, avg=35179.79, stdev=604.55, samples=19 00:19:18.014 iops : min= 8464, max= 9136, avg=8794.95, stdev=151.14, samples=19 00:19:18.014 lat (usec) : 500=98.78%, 750=1.21%, 1000=0.01% 00:19:18.014 lat (msec) : 2=0.01%, 10=0.01% 00:19:18.014 cpu : usr=85.25%, sys=12.92%, ctx=30, majf=0, minf=0 00:19:18.014 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:18.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.014 issued rwts: total=87788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.014 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:18.014 00:19:18.014 Run status group 0 (all jobs): 00:19:18.014 READ: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=343MiB (360MB), run=10001-10001msec 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:18.014 ************************************ 00:19:18.014 END TEST fio_dif_1_default 00:19:18.014 ************************************ 00:19:18.014 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.014 00:19:18.015 real 0m10.985s 00:19:18.015 user 0m9.171s 00:19:18.015 sys 0m1.544s 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 13:15:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:18.015 13:15:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:18.015 13:15:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 ************************************ 00:19:18.015 START TEST fio_dif_1_multi_subsystems 00:19:18.015 ************************************ 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # 
fio_dif_1_multi_subsystems 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 bdev_null0 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 [2024-07-25 13:15:08.773145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 bdev_null1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.015 { 00:19:18.015 "params": { 00:19:18.015 "name": "Nvme$subsystem", 00:19:18.015 "trtype": "$TEST_TRANSPORT", 00:19:18.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.015 "adrfam": "ipv4", 00:19:18.015 "trsvcid": "$NVMF_PORT", 00:19:18.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.015 "hdgst": ${hdgst:-false}, 00:19:18.015 "ddgst": ${ddgst:-false} 00:19:18.015 }, 00:19:18.015 "method": "bdev_nvme_attach_controller" 00:19:18.015 } 00:19:18.015 EOF 00:19:18.015 )") 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:18.015 13:15:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.015 { 00:19:18.015 "params": { 00:19:18.015 "name": "Nvme$subsystem", 00:19:18.015 "trtype": "$TEST_TRANSPORT", 00:19:18.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.015 "adrfam": "ipv4", 00:19:18.015 "trsvcid": "$NVMF_PORT", 00:19:18.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.015 "hdgst": ${hdgst:-false}, 00:19:18.015 "ddgst": ${ddgst:-false} 00:19:18.015 }, 00:19:18.015 "method": "bdev_nvme_attach_controller" 00:19:18.015 } 00:19:18.015 EOF 00:19:18.015 )") 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
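The trace above interleaves two mechanisms worth separating: gen_nvmf_target_json appending one "params" block per subsystem to its config array, and the fio wrapper running ldd against the SPDK fio plugin to decide whether a sanitizer runtime must be preloaded ahead of it. A minimal bash sketch of that preload probe, reconstructed from the traced commands (the plugin path and the final fio invocation are copied from the log; the loop itself is an illustration, not the verbatim autotest_common.sh source):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # Ask the dynamic linker whether the plugin links against this sanitizer runtime.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # Neither grep matched in this run, so only the plugin itself ends up in LD_PRELOAD.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61

With asan_lib left empty, the preload string becomes ' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev', which is exactly what the trace prints just before fio starts its two threads.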
00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:19:18.015 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:18.015 "params": { 00:19:18.015 "name": "Nvme0", 00:19:18.015 "trtype": "tcp", 00:19:18.015 "traddr": "10.0.0.2", 00:19:18.015 "adrfam": "ipv4", 00:19:18.015 "trsvcid": "4420", 00:19:18.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:18.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:18.015 "hdgst": false, 00:19:18.015 "ddgst": false 00:19:18.015 }, 00:19:18.015 "method": "bdev_nvme_attach_controller" 00:19:18.015 },{ 00:19:18.015 "params": { 00:19:18.015 "name": "Nvme1", 00:19:18.015 "trtype": "tcp", 00:19:18.015 "traddr": "10.0.0.2", 00:19:18.015 "adrfam": "ipv4", 00:19:18.015 "trsvcid": "4420", 00:19:18.015 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.015 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.015 "hdgst": false, 00:19:18.016 "ddgst": false 00:19:18.016 }, 00:19:18.016 "method": "bdev_nvme_attach_controller" 00:19:18.016 }' 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:18.016 13:15:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:18.016 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:18.016 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:18.016 fio-3.35 00:19:18.016 Starting 2 threads 00:19:27.996 00:19:27.996 filename0: (groupid=0, jobs=1): err= 0: pid=82181: Thu Jul 25 13:15:19 2024 00:19:27.996 read: IOPS=4829, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:19:27.996 slat (nsec): min=5150, max=74391, avg=13133.33, stdev=3425.67 00:19:27.996 clat (usec): min=600, max=5851, avg=793.00, stdev=62.71 00:19:27.996 lat (usec): min=607, max=5882, avg=806.13, stdev=63.38 00:19:27.996 clat percentiles (usec): 00:19:27.996 | 1.00th=[ 685], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 758], 00:19:27.996 | 30.00th=[ 775], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:19:27.996 | 70.00th=[ 816], 80.00th=[ 824], 90.00th=[ 840], 95.00th=[ 857], 00:19:27.996 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 963], 00:19:27.996 | 99.99th=[ 1090] 00:19:27.996 bw ( KiB/s): min=19033, max=20288, per=50.04%, avg=19334.37, stdev=268.07, samples=19 00:19:27.996 iops : min= 4758, max= 
5072, avg=4833.58, stdev=67.03, samples=19 00:19:27.996 lat (usec) : 750=15.89%, 1000=84.08% 00:19:27.996 lat (msec) : 2=0.01%, 10=0.01% 00:19:27.996 cpu : usr=90.51%, sys=8.18%, ctx=19, majf=0, minf=9 00:19:27.996 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.997 issued rwts: total=48300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.997 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:27.997 filename1: (groupid=0, jobs=1): err= 0: pid=82182: Thu Jul 25 13:15:19 2024 00:19:27.997 read: IOPS=4830, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:19:27.997 slat (nsec): min=6620, max=75028, avg=13368.33, stdev=3468.95 00:19:27.997 clat (usec): min=422, max=4596, avg=791.44, stdev=48.33 00:19:27.997 lat (usec): min=429, max=4621, avg=804.81, stdev=48.55 00:19:27.997 clat percentiles (usec): 00:19:27.997 | 1.00th=[ 701], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 766], 00:19:27.997 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:19:27.997 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 848], 00:19:27.997 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 947], 00:19:27.997 | 99.99th=[ 1090] 00:19:27.997 bw ( KiB/s): min=19033, max=20288, per=50.04%, avg=19336.37, stdev=266.14, samples=19 00:19:27.997 iops : min= 4758, max= 5072, avg=4834.05, stdev=66.58, samples=19 00:19:27.997 lat (usec) : 500=0.02%, 750=8.89%, 1000=91.06% 00:19:27.997 lat (msec) : 2=0.01%, 10=0.01% 00:19:27.997 cpu : usr=90.54%, sys=8.16%, ctx=15, majf=0, minf=0 00:19:27.997 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.997 issued rwts: total=48312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.997 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:27.997 00:19:27.997 Run status group 0 (all jobs): 00:19:27.997 READ: bw=37.7MiB/s (39.6MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=377MiB (396MB), run=10001-10001msec 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 ************************************ 00:19:27.997 END TEST fio_dif_1_multi_subsystems 00:19:27.997 ************************************ 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 00:19:27.997 real 0m11.129s 00:19:27.997 user 0m18.866s 00:19:27.997 sys 0m1.927s 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 13:15:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:27.997 13:15:19 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:27.997 13:15:19 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 ************************************ 00:19:27.997 START TEST fio_dif_rand_params 00:19:27.997 ************************************ 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:27.997 13:15:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 bdev_null0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.997 [2024-07-25 13:15:19.956537] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.997 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.997 { 00:19:27.997 "params": { 00:19:27.997 "name": "Nvme$subsystem", 00:19:27.997 "trtype": "$TEST_TRANSPORT", 00:19:27.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.997 "adrfam": "ipv4", 00:19:27.997 "trsvcid": "$NVMF_PORT", 00:19:27.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.997 "hdgst": ${hdgst:-false}, 00:19:27.997 "ddgst": ${ddgst:-false} 00:19:27.997 }, 00:19:27.998 "method": "bdev_nvme_attach_controller" 00:19:27.998 } 00:19:27.998 EOF 00:19:27.998 )") 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:27.998 
13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
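As in the earlier passes, the wrapper never writes its inputs to disk: fio receives the SPDK JSON configuration (the bdev_nvme_attach_controller parameters assembled above) as /dev/fd/62 and the generated job description from gen_fio_conf as /dev/fd/61. A hedged sketch of the equivalent call shape using bash process substitution, which is what turns each generated stream into a /dev/fd/NN path (helper names are taken from the trace; the exact descriptor numbers and plumbing inside target/dif.sh may differ):

    # Each <(...) becomes a /dev/fd/NN pseudo-file that fio opens and reads once.
    /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0) \
        <(gen_fio_conf)

The first substitution carries the target-attach JSON; the trailing positional argument is treated by fio as its job file.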
00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:27.998 "params": { 00:19:27.998 "name": "Nvme0", 00:19:27.998 "trtype": "tcp", 00:19:27.998 "traddr": "10.0.0.2", 00:19:27.998 "adrfam": "ipv4", 00:19:27.998 "trsvcid": "4420", 00:19:27.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:27.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:27.998 "hdgst": false, 00:19:27.998 "ddgst": false 00:19:27.998 }, 00:19:27.998 "method": "bdev_nvme_attach_controller" 00:19:27.998 }' 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.998 13:15:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:27.998 13:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:27.998 13:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:27.998 13:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:27.998 13:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:27.998 13:15:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.998 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:27.998 ... 
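The banner line above, combined with the parameters this test set earlier (bs=128k, numjobs=3, iodepth=3, runtime=5, random reads against the DIF-type-3 null bdev), pins down the workload that is about to run. For readability, an illustrative job description consistent with those traced values; this is a reconstruction, not the literal output of gen_fio_conf, and filename=Nvme0n1 is an inference from fio's per-file reporting elsewhere in this log:

    ; sketch of the fio job implied by the trace (thread=1 is the usual
    ; requirement for the SPDK ioengines and is assumed here)
    [global]
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5
    [filename0]
    filename=Nvme0n1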
00:19:27.998 fio-3.35 00:19:27.998 Starting 3 threads 00:19:34.557 00:19:34.557 filename0: (groupid=0, jobs=1): err= 0: pid=82338: Thu Jul 25 13:15:25 2024 00:19:34.557 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(163MiB/5009msec) 00:19:34.557 slat (nsec): min=7749, max=47222, avg=12124.75, stdev=5999.30 00:19:34.557 clat (usec): min=11357, max=14348, avg=11482.99, stdev=227.46 00:19:34.557 lat (usec): min=11366, max=14365, avg=11495.11, stdev=227.73 00:19:34.557 clat percentiles (usec): 00:19:34.557 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:19:34.557 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:19:34.557 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11600], 00:19:34.557 | 99.00th=[12125], 99.50th=[13042], 99.90th=[14353], 99.95th=[14353], 00:19:34.557 | 99.99th=[14353] 00:19:34.557 bw ( KiB/s): min=33024, max=33792, per=33.32%, avg=33331.20, stdev=396.59, samples=10 00:19:34.557 iops : min= 258, max= 264, avg=260.40, stdev= 3.10, samples=10 00:19:34.557 lat (msec) : 20=100.00% 00:19:34.558 cpu : usr=91.10%, sys=8.23%, ctx=113, majf=0, minf=9 00:19:34.558 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.558 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.558 filename0: (groupid=0, jobs=1): err= 0: pid=82339: Thu Jul 25 13:15:25 2024 00:19:34.558 read: IOPS=260, BW=32.6MiB/s (34.1MB/s)(163MiB/5009msec) 00:19:34.558 slat (nsec): min=7933, max=30180, avg=11540.20, stdev=4211.22 00:19:34.558 clat (usec): min=9736, max=14385, avg=11483.69, stdev=247.48 00:19:34.558 lat (usec): min=9744, max=14403, avg=11495.23, stdev=247.66 00:19:34.558 clat percentiles (usec): 00:19:34.558 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11469], 00:19:34.558 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:19:34.558 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11600], 00:19:34.558 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14353], 99.95th=[14353], 00:19:34.558 | 99.99th=[14353] 00:19:34.558 bw ( KiB/s): min=33024, max=33792, per=33.32%, avg=33331.20, stdev=396.59, samples=10 00:19:34.558 iops : min= 258, max= 264, avg=260.40, stdev= 3.10, samples=10 00:19:34.558 lat (msec) : 10=0.23%, 20=99.77% 00:19:34.558 cpu : usr=90.69%, sys=8.61%, ctx=54, majf=0, minf=9 00:19:34.558 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.558 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.558 filename0: (groupid=0, jobs=1): err= 0: pid=82340: Thu Jul 25 13:15:25 2024 00:19:34.558 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(163MiB/5008msec) 00:19:34.558 slat (nsec): min=7825, max=32912, avg=10613.22, stdev=3522.83 00:19:34.558 clat (usec): min=7800, max=14092, avg=11485.30, stdev=301.56 00:19:34.558 lat (usec): min=7808, max=14106, avg=11495.91, stdev=301.84 00:19:34.558 clat percentiles (usec): 00:19:34.558 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11469], 20.00th=[11469], 00:19:34.558 | 30.00th=[11469], 40.00th=[11469], 
50.00th=[11469], 60.00th=[11469], 00:19:34.558 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11600], 95.00th=[11600], 00:19:34.558 | 99.00th=[12649], 99.50th=[13435], 99.90th=[14091], 99.95th=[14091], 00:19:34.558 | 99.99th=[14091] 00:19:34.558 bw ( KiB/s): min=33024, max=33792, per=33.32%, avg=33337.80, stdev=391.43, samples=10 00:19:34.558 iops : min= 258, max= 264, avg=260.40, stdev= 3.10, samples=10 00:19:34.558 lat (msec) : 10=0.23%, 20=99.77% 00:19:34.558 cpu : usr=92.15%, sys=7.25%, ctx=27, majf=0, minf=0 00:19:34.558 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.558 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.558 00:19:34.558 Run status group 0 (all jobs): 00:19:34.558 READ: bw=97.7MiB/s (102MB/s), 32.6MiB/s-32.6MiB/s (34.1MB/s-34.2MB/s), io=489MiB (513MB), run=5008-5009msec 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:34.558 
13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 bdev_null0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 [2024-07-25 13:15:25.975800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 bdev_null1 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.558 13:15:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.558 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.559 13:15:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.559 bdev_null2 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.559 { 00:19:34.559 "params": { 00:19:34.559 "name": "Nvme$subsystem", 00:19:34.559 "trtype": "$TEST_TRANSPORT", 
00:19:34.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.559 "adrfam": "ipv4", 00:19:34.559 "trsvcid": "$NVMF_PORT", 00:19:34.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.559 "hdgst": ${hdgst:-false}, 00:19:34.559 "ddgst": ${ddgst:-false} 00:19:34.559 }, 00:19:34.559 "method": "bdev_nvme_attach_controller" 00:19:34.559 } 00:19:34.559 EOF 00:19:34.559 )") 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.559 { 00:19:34.559 "params": { 00:19:34.559 "name": "Nvme$subsystem", 00:19:34.559 "trtype": "$TEST_TRANSPORT", 00:19:34.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.559 "adrfam": "ipv4", 00:19:34.559 "trsvcid": "$NVMF_PORT", 00:19:34.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.559 "hdgst": ${hdgst:-false}, 00:19:34.559 "ddgst": ${ddgst:-false} 00:19:34.559 }, 00:19:34.559 "method": "bdev_nvme_attach_controller" 00:19:34.559 } 00:19:34.559 EOF 00:19:34.559 )") 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 
00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:34.559 { 00:19:34.559 "params": { 00:19:34.559 "name": "Nvme$subsystem", 00:19:34.559 "trtype": "$TEST_TRANSPORT", 00:19:34.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.559 "adrfam": "ipv4", 00:19:34.559 "trsvcid": "$NVMF_PORT", 00:19:34.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.559 "hdgst": ${hdgst:-false}, 00:19:34.559 "ddgst": ${ddgst:-false} 00:19:34.559 }, 00:19:34.559 "method": "bdev_nvme_attach_controller" 00:19:34.559 } 00:19:34.559 EOF 00:19:34.559 )") 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:19:34.559 13:15:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:34.559 "params": { 00:19:34.559 "name": "Nvme0", 00:19:34.559 "trtype": "tcp", 00:19:34.559 "traddr": "10.0.0.2", 00:19:34.559 "adrfam": "ipv4", 00:19:34.559 "trsvcid": "4420", 00:19:34.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:34.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:34.559 "hdgst": false, 00:19:34.559 "ddgst": false 00:19:34.559 }, 00:19:34.559 "method": "bdev_nvme_attach_controller" 00:19:34.559 },{ 00:19:34.559 "params": { 00:19:34.559 "name": "Nvme1", 00:19:34.559 "trtype": "tcp", 00:19:34.559 "traddr": "10.0.0.2", 00:19:34.559 "adrfam": "ipv4", 00:19:34.559 "trsvcid": "4420", 00:19:34.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.559 "hdgst": false, 00:19:34.559 "ddgst": false 00:19:34.559 }, 00:19:34.559 "method": "bdev_nvme_attach_controller" 00:19:34.559 },{ 00:19:34.559 "params": { 00:19:34.559 "name": "Nvme2", 00:19:34.559 "trtype": "tcp", 00:19:34.559 "traddr": "10.0.0.2", 00:19:34.559 "adrfam": "ipv4", 00:19:34.559 "trsvcid": "4420", 00:19:34.559 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.560 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:34.560 "hdgst": false, 00:19:34.560 "ddgst": false 00:19:34.560 }, 00:19:34.560 "method": "bdev_nvme_attach_controller" 00:19:34.560 }' 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:34.560 13:15:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.560 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:34.560 ... 00:19:34.560 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:34.560 ... 00:19:34.560 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:34.560 ... 00:19:34.560 fio-3.35 00:19:34.560 Starting 24 threads 00:19:46.762 fio: pid=82444, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.762 [2024-07-25 13:15:37.595945] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2a5da40 via correct icresp 00:19:46.762 [2024-07-25 13:15:37.596036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2a5da40 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=47452160, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=34287616, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=831488, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=53870592, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=37478400, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=2457600, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=40656896, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=11014144, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=48861184, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=17141760, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=55959552, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=27402240, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=38703104, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=35835904, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=1691648, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=61239296, buflen=4096 00:19:46.762 fio: pid=82443, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.762 [2024-07-25 13:15:37.624930] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2a5cf00 via correct icresp 00:19:46.762 [2024-07-25 13:15:37.624977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2a5cf00 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=45527040, buflen=4096 00:19:46.762 fio: io_u error on file Nvme1n1: Input/output error: read offset=31428608, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=38137856, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=48013312, buflen=4096 
00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=19353600, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=60690432, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=5431296, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=40865792, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=28303360, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=54108160, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=45944832, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=36769792, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=19599360, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=9601024, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=6950912, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=58298368, buflen=4096 00:19:46.763 fio: pid=82448, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.763 [2024-07-25 13:15:37.677948] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414e000 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.677997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414e000 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=21385216, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=24121344, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=65490944, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=54857728, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=65241088, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=26177536, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=14835712, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=1720320, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=40833024, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=2764800, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=59269120, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=65015808, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=8192, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=37806080, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=15462400, buflen=4096 00:19:46.763 fio: io_u error on file Nvme1n1: Input/output error: read offset=17604608, buflen=4096 00:19:46.763 fio: pid=82451, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.763 [2024-07-25 13:15:37.681927] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2a5d680 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.681971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to connect tqpair=0x2a5d680 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=28827648, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=49283072, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=65605632, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=58163200, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=34717696, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=34066432, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=49029120, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=50032640, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=9048064, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=25878528, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=53391360, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=44511232, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=14729216, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=33435648, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=42016768, buflen=4096 00:19:46.763 [2024-07-25 13:15:37.710903] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414e960 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.711204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414e960 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=16191488, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:19:46.763 fio: pid=82458, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:19:46.763 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:19:46.763 fio: io_u error on 
file Nvme2n1: Input/output error: read offset=9224192, buflen=4096 00:19:46.763 [2024-07-25 13:15:37.717958] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414e3c0 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.717970] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414e5a0 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.718225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414e3c0 00:19:46.763 [2024-07-25 13:15:37.718419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414e5a0 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=59994112, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=65564672, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=58109952, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=44072960, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=34648064, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=40460288, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=65613824, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=62394368, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=12816384, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=16596992, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=966656, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=15175680, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=20307968, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=40996864, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=52510720, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=36761600, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=21958656, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=24530944, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=22044672, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=32223232, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=54571008, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=2121728, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=12918784, buflen=4096 00:19:46.763 fio: pid=82437, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=28672, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=18472960, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=56102912, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=45244416, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=37945344, 
buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=63819776, buflen=4096 00:19:46.763 fio: pid=82435, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=25673728, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=864256, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=50180096, buflen=4096 00:19:46.763 [2024-07-25 13:15:37.727925] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2a5dc20 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.727967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2a5dc20 00:19:46.763 [2024-07-25 13:15:37.728183] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414e1e0 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.728200] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414f0e0 via correct icresp 00:19:46.763 [2024-07-25 13:15:37.728231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414e1e0 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=36966400, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=55111680, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=38477824, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=23859200, buflen=4096 00:19:46.763 fio: io_u error on file Nvme0n1: Input/output error: read offset=58073088, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=59396096, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=56569856, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=61960192, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=41091072, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=42139648, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=4927488, buflen=4096 00:19:46.764 [2024-07-25 13:15:37.728255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414f0e0 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=66584576, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=53096448, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=13066240, buflen=4096 00:19:46.764 [2024-07-25 13:15:37.728280] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414e780 via correct icresp 00:19:46.764 [2024-07-25 13:15:37.728329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414e780 00:19:46.764 [2024-07-25 13:15:37.728329] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414f4a0 via correct icresp 00:19:46.764 [2024-07-25 13:15:37.728417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414f4a0 00:19:46.764 fio: pid=82442, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.764 fio: io_u error on file Nvme0n1: 
Input/output error: read offset=53342208, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=39178240, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=23973888, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=38977536, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=12337152, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=4063232, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=63217664, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=3981312, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=36765696, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=58126336, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=41697280, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=16314368, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=13213696, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=12914688, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=30134272, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=38567936, buflen=4096 00:19:46.764 fio: pid=82457, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=44494848, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=7815168, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=19873792, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=6111232, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=16973824, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=49205248, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=31784960, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=7819264, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=59662336, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=34951168, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=1232896, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=54398976, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=61435904, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=61767680, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=40538112, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=51154944, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=2277376, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=33816576, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=15503360, buflen=4096 00:19:46.764 
fio: io_u error on file Nvme1n1: Input/output error: read offset=18022400, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=35905536, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=64307200, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=35549184, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=39657472, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=52142080, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=34988032, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=55037952, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=55644160, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=28934144, buflen=4096 00:19:46.764 fio: pid=82446, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.764 fio: pid=82449, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=47759360, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=5263360, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=64847872, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=54173696, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=22335488, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=62726144, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=58773504, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=737280, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=36368384, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=43036672, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=52785152, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=36683776, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=37040128, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=48943104, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=40386560, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=24539136, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=50425856, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=10510336, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=27250688, buflen=4096 00:19:46.764 [2024-07-25 13:15:37.729378] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2a5de00 via correct icresp 00:19:46.764 fio: pid=82440, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.764 [2024-07-25 13:15:37.729412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2a5de00 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read 
offset=66953216, buflen=4096 00:19:46.764 fio: io_u error on file Nvme0n1: Input/output error: read offset=55177216, buflen=4096 00:19:46.764 fio: pid=82455, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=64974848, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=51191808, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=12398592, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=61280256, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=48734208, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=11575296, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=52445184, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=60080128, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=44896256, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=53538816, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=62349312, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=37318656, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=2191360, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=40357888, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=4063232, buflen=4096 00:19:46.764 fio: io_u error on file Nvme2n1: Input/output error: read offset=26181632, buflen=4096 00:19:46.764 [2024-07-25 13:15:37.729796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a5c960 (9): Bad file descriptor 00:19:46.764 [2024-07-25 13:15:37.730035] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414ef00 via correct icresp 00:19:46.764 [2024-07-25 13:15:37.730061] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414f2c0 via correct icresp 00:19:46.764 [2024-07-25 13:15:37.730036] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414eb40 via correct icresp 00:19:46.764 [2024-07-25 13:15:37.730112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414ef00 00:19:46.764 [2024-07-25 13:15:37.730142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414f2c0 00:19:46.764 [2024-07-25 13:15:37.730158] nvme_tcp.c:2414:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x414ed20 via correct icresp 00:19:46.764 [2024-07-25 13:15:37.730290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414eb40 00:19:46.764 [2024-07-25 13:15:37.730317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x414ed20 00:19:46.764 fio: pid=82450, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=39387136, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=47931392, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read 
offset=21307392, buflen=4096 00:19:46.764 fio: io_u error on file Nvme1n1: Input/output error: read offset=58040320, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=46010368, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=42749952, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=35491840, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=57552896, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=17387520, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=37646336, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=42897408, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=28008448, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=42741760, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=2555904, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=34897920, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=50384896, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=57282560, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=23162880, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:19:46.765 fio: pid=82439, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=26419200, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=56233984, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=7028736, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=57393152, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=66609152, buflen=4096 00:19:46.765 fio: io_u error on file 
Nvme0n1: Input/output error: read offset=49180672, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=20025344, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=44130304, buflen=4096 00:19:46.765 fio: pid=82441, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=23453696, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=34766848, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=36364288, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=32235520, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=32780288, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=54685696, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=13651968, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=10285056, buflen=4096 00:19:46.765 fio: pid=82445, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=33353728, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=62767104, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=9048064, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=34938880, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=31834112, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=52871168, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=46837760, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=10993664, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=1417216, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=45432832, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=51412992, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=63549440, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=26009600, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=61190144, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=32485376, buflen=4096 00:19:46.765 fio: io_u error on file Nvme1n1: Input/output error: read offset=5083136, buflen=4096 00:19:46.765 fio: pid=82436, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=65187840, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=48304128, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=55967744, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=53641216, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=33284096, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=25718784, buflen=4096 
00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=13090816, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=46829568, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=33095680, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=50106368, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=29609984, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=47587328, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=62816256, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=1392640, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=1024000, buflen=4096 00:19:46.765 fio: io_u error on file Nvme0n1: Input/output error: read offset=65961984, buflen=4096 00:19:46.765 [2024-07-25 13:15:37.732663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a5c5a0 (9): Bad file descriptor 00:19:46.765 [2024-07-25 13:15:37.732856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a5c780 (9): Bad file descriptor 00:19:46.765 [2024-07-25 13:15:37.733310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 [2024-07-25 13:15:37.733460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b61f00 is same with the state(5) to be set 00:19:46.765 00:19:46.765 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82435: Thu Jul 25 13:15:37 2024 00:19:46.765 cpu : usr=0.00%, sys=0.00%, ctx=29, majf=0, minf=0 00:19:46.765 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.765 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.765 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.765 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.765 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82436: Thu Jul 25 13:15:37 2024 00:19:46.765 read: IOPS=1730, BW=6896KiB/s (7062kB/s)(16.2MiB/2399msec) 00:19:46.765 slat (usec): min=4, max=4037, avg=12.85, stdev=124.70 00:19:46.765 clat (usec): min=387, max=82542, avg=9162.75, stdev=11691.46 00:19:46.765 lat (usec): min=395, max=82559, avg=9175.62, stdev=11690.48 00:19:46.765 clat percentiles (usec): 00:19:46.765 | 1.00th=[ 1598], 5.00th=[ 1647], 10.00th=[ 1680], 20.00th=[ 1713], 00:19:46.765 | 30.00th=[ 1762], 40.00th=[ 3359], 50.00th=[ 5473], 60.00th=[ 5735], 00:19:46.765 | 70.00th=[ 7308], 80.00th=[13566], 90.00th=[28705], 95.00th=[35914], 00:19:46.765 | 99.00th=[44827], 99.50th=[46924], 99.90th=[82314], 99.95th=[82314], 00:19:46.765 | 99.99th=[82314] 00:19:46.765 bw ( KiB/s): min= 1776, max=13921, per=25.16%, avg=5100.25, stdev=5892.27, samples=4 00:19:46.765 iops : min= 444, max= 3480, avg=1275.00, stdev=1472.94, samples=4 00:19:46.765 lat (usec) : 500=0.19%, 750=0.14% 00:19:46.765 lat (msec) : 2=36.20%, 4=4.31%, 10=37.24%, 20=5.85%, 50=15.25% 00:19:46.766 lat (msec) : 100=0.43% 00:19:46.766 cpu : usr=42.83%, sys=4.67%, ctx=232, majf=0, minf=9 00:19:46.766 IO depths : 1=1.5%, 2=7.3%, 4=23.5%, 8=56.4%, 16=11.3%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.1%, 4=94.0%, 8=0.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=4152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82437: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=25, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename0: (groupid=0, jobs=1): err= 0: pid=82438: Thu Jul 25 13:15:37 2024 00:19:46.766 read: IOPS=740, BW=2964KiB/s (3035kB/s)(28.9MiB/10002msec) 00:19:46.766 slat (usec): min=4, max=10038, avg=28.27, stdev=361.16 00:19:46.766 clat (usec): min=1313, max=50118, avg=21396.19, stdev=7585.26 00:19:46.766 lat (usec): min=1326, max=50132, avg=21424.46, stdev=7586.50 00:19:46.766 clat percentiles (usec): 00:19:46.766 | 
1.00th=[ 7504], 5.00th=[11863], 10.00th=[12125], 20.00th=[13960], 00:19:46.766 | 30.00th=[15533], 40.00th=[21103], 50.00th=[22938], 60.00th=[23725], 00:19:46.766 | 70.00th=[23987], 80.00th=[24249], 90.00th=[33424], 95.00th=[35914], 00:19:46.766 | 99.00th=[45876], 99.50th=[47449], 99.90th=[47973], 99.95th=[49546], 00:19:46.766 | 99.99th=[50070] 00:19:46.766 bw ( KiB/s): min= 2032, max= 3626, per=14.64%, avg=2968.74, stdev=461.06, samples=19 00:19:46.766 iops : min= 508, max= 906, avg=742.11, stdev=115.17, samples=19 00:19:46.766 lat (msec) : 2=0.31%, 4=0.46%, 10=2.04%, 20=33.21%, 50=63.96% 00:19:46.766 lat (msec) : 100=0.03% 00:19:46.766 cpu : usr=31.09%, sys=2.19%, ctx=1010, majf=0, minf=9 00:19:46.766 IO depths : 1=1.9%, 2=7.2%, 4=22.2%, 8=57.8%, 16=10.8%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=7411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82439: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82440: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82441: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82442: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82443: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82444: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=1 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82445: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82446: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=2, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 0: pid=82447: Thu Jul 25 13:15:37 2024 00:19:46.766 read: IOPS=767, BW=3071KiB/s (3145kB/s)(30.0MiB/10007msec) 00:19:46.766 slat (usec): min=3, max=8048, avg=20.07, stdev=228.38 00:19:46.766 clat (usec): min=1729, max=56292, avg=20686.60, stdev=7257.71 00:19:46.766 lat (usec): min=1743, max=56314, avg=20706.66, stdev=7258.77 00:19:46.766 clat percentiles (usec): 00:19:46.766 | 1.00th=[ 6325], 5.00th=[10552], 10.00th=[12125], 20.00th=[14353], 00:19:46.766 | 30.00th=[15926], 40.00th=[18220], 50.00th=[21627], 60.00th=[22938], 00:19:46.766 | 70.00th=[23725], 80.00th=[24249], 90.00th=[30802], 95.00th=[34866], 00:19:46.766 | 99.00th=[39584], 99.50th=[43779], 99.90th=[49546], 99.95th=[51119], 00:19:46.766 | 99.99th=[56361] 00:19:46.766 bw ( KiB/s): min= 2048, max= 3696, per=15.18%, avg=3078.26, stdev=492.26, samples=19 00:19:46.766 iops : min= 512, max= 924, avg=769.53, stdev=123.07, samples=19 00:19:46.766 lat (msec) : 2=0.05%, 4=0.43%, 10=3.53%, 20=39.15%, 50=56.76% 00:19:46.766 lat (msec) : 
100=0.08% 00:19:46.766 cpu : usr=36.03%, sys=2.39%, ctx=1131, majf=0, minf=9 00:19:46.766 IO depths : 1=1.7%, 2=7.2%, 4=22.6%, 8=57.5%, 16=11.0%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=7683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82448: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82449: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82450: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.766 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.766 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82451: Thu Jul 25 13:15:37 2024 00:19:46.766 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.766 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 filename2: (groupid=0, jobs=1): err= 0: pid=82452: Thu Jul 25 13:15:37 2024 00:19:46.767 read: IOPS=771, BW=3086KiB/s (3160kB/s)(30.1MiB/10001msec) 00:19:46.767 slat (usec): min=4, max=11155, avg=27.10, stdev=336.20 00:19:46.767 clat (usec): min=864, max=54533, avg=20543.57, stdev=7420.74 00:19:46.767 lat (usec): min=872, max=54556, avg=20570.66, stdev=7424.68 00:19:46.767 clat percentiles (usec): 00:19:46.767 | 1.00th=[ 3556], 5.00th=[10421], 10.00th=[11994], 20.00th=[13566], 00:19:46.767 | 30.00th=[15401], 40.00th=[19006], 50.00th=[21627], 60.00th=[22938], 00:19:46.767 | 
70.00th=[23725], 80.00th=[23987], 90.00th=[31851], 95.00th=[34866], 00:19:46.767 | 99.00th=[39060], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:19:46.767 | 99.99th=[54789] 00:19:46.767 bw ( KiB/s): min= 2048, max= 3856, per=15.23%, avg=3087.37, stdev=499.78, samples=19 00:19:46.767 iops : min= 512, max= 964, avg=771.79, stdev=124.91, samples=19 00:19:46.767 lat (usec) : 1000=0.03% 00:19:46.767 lat (msec) : 2=0.13%, 4=0.95%, 10=3.19%, 20=37.51%, 50=58.18% 00:19:46.767 lat (msec) : 100=0.03% 00:19:46.767 cpu : usr=34.75%, sys=2.41%, ctx=994, majf=0, minf=9 00:19:46.767 IO depths : 1=1.7%, 2=7.3%, 4=23.0%, 8=57.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=7716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 filename2: (groupid=0, jobs=1): err= 0: pid=82453: Thu Jul 25 13:15:37 2024 00:19:46.767 read: IOPS=792, BW=3168KiB/s (3244kB/s)(31.0MiB/10005msec) 00:19:46.767 slat (usec): min=4, max=9034, avg=26.17, stdev=281.47 00:19:46.767 clat (usec): min=1094, max=58816, avg=20003.60, stdev=7353.83 00:19:46.767 lat (usec): min=1104, max=58825, avg=20029.77, stdev=7360.31 00:19:46.767 clat percentiles (usec): 00:19:46.767 | 1.00th=[ 5669], 5.00th=[ 8979], 10.00th=[11863], 20.00th=[14746], 00:19:46.767 | 30.00th=[15926], 40.00th=[16188], 50.00th=[19006], 60.00th=[22414], 00:19:46.767 | 70.00th=[23725], 80.00th=[24249], 90.00th=[30540], 95.00th=[33817], 00:19:46.767 | 99.00th=[41157], 99.50th=[45351], 99.90th=[46924], 99.95th=[49021], 00:19:46.767 | 99.99th=[58983] 00:19:46.767 bw ( KiB/s): min= 2048, max= 4167, per=15.71%, avg=3185.16, stdev=526.44, samples=19 00:19:46.767 iops : min= 512, max= 1041, avg=796.21, stdev=131.52, samples=19 00:19:46.767 lat (msec) : 2=0.34%, 4=0.34%, 10=5.72%, 20=45.65%, 50=47.93% 00:19:46.767 lat (msec) : 100=0.03% 00:19:46.767 cpu : usr=41.99%, sys=2.58%, ctx=1298, majf=0, minf=9 00:19:46.767 IO depths : 1=1.5%, 2=7.0%, 4=22.4%, 8=57.7%, 16=11.4%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=93.7%, 8=1.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=7924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 filename2: (groupid=0, jobs=1): err= 0: pid=82454: Thu Jul 25 13:15:37 2024 00:19:46.767 read: IOPS=809, BW=3240KiB/s (3317kB/s)(31.6MiB/10001msec) 00:19:46.767 slat (usec): min=3, max=8083, avg=29.65, stdev=309.06 00:19:46.767 clat (usec): min=777, max=55978, avg=19512.97, stdev=7493.35 00:19:46.767 lat (usec): min=785, max=55991, avg=19542.62, stdev=7497.41 00:19:46.767 clat percentiles (usec): 00:19:46.767 | 1.00th=[ 2057], 5.00th=[ 7898], 10.00th=[10028], 20.00th=[14353], 00:19:46.767 | 30.00th=[15795], 40.00th=[16319], 50.00th=[18482], 60.00th=[21627], 00:19:46.767 | 70.00th=[23462], 80.00th=[24249], 90.00th=[30540], 95.00th=[32637], 00:19:46.767 | 99.00th=[39060], 99.50th=[41157], 99.90th=[47449], 99.95th=[50070], 00:19:46.767 | 99.99th=[55837] 00:19:46.767 bw ( KiB/s): min= 2064, max= 4182, per=16.08%, avg=3259.37, stdev=599.69, samples=19 00:19:46.767 iops : min= 516, max= 1045, avg=814.79, stdev=149.87, samples=19 00:19:46.767 lat (usec) : 1000=0.05% 00:19:46.767 lat (msec) : 2=0.86%, 4=0.68%, 10=8.42%, 
20=44.47%, 50=45.47% 00:19:46.767 lat (msec) : 100=0.05% 00:19:46.767 cpu : usr=42.05%, sys=3.09%, ctx=1186, majf=0, minf=0 00:19:46.767 IO depths : 1=1.3%, 2=7.3%, 4=24.1%, 8=56.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=8100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82455: Thu Jul 25 13:15:37 2024 00:19:46.767 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.767 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 filename2: (groupid=0, jobs=1): err= 0: pid=82456: Thu Jul 25 13:15:37 2024 00:19:46.767 read: IOPS=776, BW=3106KiB/s (3180kB/s)(30.4MiB/10012msec) 00:19:46.767 slat (usec): min=4, max=10035, avg=24.79, stdev=281.55 00:19:46.767 clat (usec): min=1118, max=50546, avg=20411.95, stdev=6979.90 00:19:46.767 lat (usec): min=1127, max=50562, avg=20436.74, stdev=6983.28 00:19:46.767 clat percentiles (usec): 00:19:46.767 | 1.00th=[ 5604], 5.00th=[10028], 10.00th=[12125], 20.00th=[14877], 00:19:46.767 | 30.00th=[16057], 40.00th=[17695], 50.00th=[20055], 60.00th=[22152], 00:19:46.767 | 70.00th=[23725], 80.00th=[24773], 90.00th=[30278], 95.00th=[33817], 00:19:46.767 | 99.00th=[39060], 99.50th=[40633], 99.90th=[47973], 99.95th=[47973], 00:19:46.767 | 99.99th=[50594] 00:19:46.767 bw ( KiB/s): min= 2192, max= 4008, per=15.30%, avg=3101.30, stdev=497.96, samples=20 00:19:46.767 iops : min= 548, max= 1002, avg=775.30, stdev=124.48, samples=20 00:19:46.767 lat (msec) : 2=0.13%, 4=0.62%, 10=4.24%, 20=44.97%, 50=50.01% 00:19:46.767 lat (msec) : 100=0.03% 00:19:46.767 cpu : usr=38.74%, sys=2.95%, ctx=1689, majf=0, minf=9 00:19:46.767 IO depths : 1=1.7%, 2=7.1%, 4=22.5%, 8=57.6%, 16=11.1%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=93.8%, 8=0.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=7774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82457: Thu Jul 25 13:15:37 2024 00:19:46.767 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:19:46.767 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=82458: Thu Jul 25 13:15:37 2024 00:19:46.767 cpu : usr=0.00%, sys=0.00%, ctx=12, majf=0, minf=0 00:19:46.767 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.767 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.767 00:19:46.767 Run status group 0 (all jobs): 00:19:46.767 READ: bw=19.8MiB/s (20.8MB/s), 2964KiB/s-6896KiB/s (3035kB/s-7062kB/s), io=198MiB (208MB), run=2399-10012msec 00:19:46.767 13:15:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # trap - ERR 00:19:46.767 13:15:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # print_backtrace 00:19:46.767 13:15:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:19:46.768 13:15:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1155 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:19:46.768 13:15:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1155 -- # local args 00:19:46.768 13:15:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1157 -- # xtrace_disable 00:19:46.768 13:15:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.768 ========== Backtrace start: ========== 00:19:46.768 00:19:46.768 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1352 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:19:46.768 ... 00:19:46.768 1347 break 00:19:46.768 1348 fi 00:19:46.768 1349 done 00:19:46.768 1350 00:19:46.768 1351 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:19:46.768 1352 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:19:46.768 1353 } 00:19:46.768 1354 00:19:46.768 1355 function fio_bdev() { 00:19:46.768 1356 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:19:46.768 1357 } 00:19:46.768 ... 00:19:46.768 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1356 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:19:46.768 ... 00:19:46.768 1351 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:19:46.768 1352 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:19:46.768 1353 } 00:19:46.768 1354 00:19:46.768 1355 function fio_bdev() { 00:19:46.768 1356 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:19:46.768 1357 } 00:19:46.768 1358 00:19:46.768 1359 function fio_nvme() { 00:19:46.768 1360 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:19:46.768 1361 } 00:19:46.768 ... 00:19:46.768 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:19:46.768 ... 00:19:46.768 77 FIO 00:19:46.768 78 done 00:19:46.768 79 } 00:19:46.768 80 00:19:46.768 81 fio() { 00:19:46.768 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:19:46.768 83 } 00:19:46.768 84 00:19:46.768 85 fio_dif_1() { 00:19:46.768 86 create_subsystems 0 00:19:46.768 87 fio <(create_json_sub_conf 0) 00:19:46.768 ... 00:19:46.768 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([]) 00:19:46.768 ... 
00:19:46.768 107 destroy_subsystems 0 00:19:46.768 108 00:19:46.768 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2 00:19:46.768 110 00:19:46.768 111 create_subsystems 0 1 2 00:19:46.768 => 112 fio <(create_json_sub_conf 0 1 2) 00:19:46.768 113 destroy_subsystems 0 1 2 00:19:46.768 114 00:19:46.768 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1 00:19:46.768 116 00:19:46.768 117 create_subsystems 0 1 00:19:46.768 ... 00:19:46.768 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"]) 00:19:46.768 ... 00:19:46.768 1120 timing_enter $test_name 00:19:46.768 1121 echo "************************************" 00:19:46.768 1122 echo "START TEST $test_name" 00:19:46.768 1123 echo "************************************" 00:19:46.768 1124 xtrace_restore 00:19:46.768 1125 time "$@" 00:19:46.768 1126 xtrace_disable 00:19:46.768 1127 echo "************************************" 00:19:46.768 1128 echo "END TEST $test_name" 00:19:46.768 1129 echo "************************************" 00:19:46.768 1130 timing_exit $test_name 00:19:46.768 ... 00:19:46.768 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"]) 00:19:46.768 ... 00:19:46.768 138 00:19:46.768 139 create_transport 00:19:46.768 140 00:19:46.768 141 run_test "fio_dif_1_default" fio_dif_1 00:19:46.768 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems 00:19:46.768 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params 00:19:46.768 144 run_test "fio_dif_digest" fio_dif_digest 00:19:46.768 145 00:19:46.768 146 trap - SIGINT SIGTERM EXIT 00:19:46.768 147 nvmftestfini 00:19:46.768 ... 00:19:46.768 00:19:46.768 ========== Backtrace end ========== 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1194 -- # return 0 00:19:46.768 00:19:46.768 real 0m18.107s 00:19:46.768 user 1m54.600s 00:19:46.768 sys 0m3.573s 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # process_shm --id 0 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@808 -- # type=--id 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@809 -- # id=0 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:46.768 nvmf_trace.0 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@823 -- # return 0 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # nvmftestfini 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@117 -- # sync 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.768 13:15:38 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@120 -- # set +e 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.768 rmmod nvme_tcp 00:19:46.768 rmmod nvme_fabrics 00:19:46.768 rmmod nvme_keyring 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@124 -- # set -e 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@125 -- # return 0 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@489 -- # '[' -n 81953 ']' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@490 -- # killprocess 81953 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@950 -- # '[' -z 81953 ']' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@954 -- # kill -0 81953 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@955 -- # uname 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81953 00:19:46.768 killing process with pid 81953 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81953' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@969 -- # kill 81953 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@974 -- # wait 81953 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:19:46.768 13:15:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:46.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.768 Waiting for block devices as requested 00:19:46.768 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:47.027 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.027 13:15:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1125 -- # trap - ERR 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1125 -- # print_backtrace 00:19:47.027 13:15:39 nvmf_dif -- 
common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1155 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1155 -- # local args 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1157 -- # xtrace_disable 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:47.027 ========== Backtrace start: ========== 00:19:47.027 00:19:47.027 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"]) 00:19:47.027 ... 00:19:47.027 1120 timing_enter $test_name 00:19:47.027 1121 echo "************************************" 00:19:47.027 1122 echo "START TEST $test_name" 00:19:47.027 1123 echo "************************************" 00:19:47.027 1124 xtrace_restore 00:19:47.027 1125 time "$@" 00:19:47.027 1126 xtrace_disable 00:19:47.027 1127 echo "************************************" 00:19:47.027 1128 echo "END TEST $test_name" 00:19:47.027 1129 echo "************************************" 00:19:47.027 1130 timing_exit $test_name 00:19:47.027 ... 00:19:47.027 in /home/vagrant/spdk_repo/spdk/autotest.sh:296 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:19:47.027 ... 00:19:47.027 291 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:19:47.027 292 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:19:47.027 293 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:19:47.027 294 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:19:47.027 295 fi 00:19:47.027 => 296 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:19:47.027 297 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh 00:19:47.027 298 # The keyring tests utilize NVMe/TLS 00:19:47.027 299 run_test "keyring_file" "$rootdir/test/keyring/file.sh" 00:19:47.027 300 if [[ "$CONFIG_HAVE_KEYUTILS" == y ]]; then 00:19:47.027 301 run_test "keyring_linux" "$rootdir/test/keyring/linux.sh" 00:19:47.027 ... 
00:19:47.027 00:19:47.027 ========== Backtrace end ========== 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1194 -- # return 0 00:19:47.027 00:19:47.027 real 0m43.377s 00:19:47.027 user 2m55.429s 00:19:47.027 sys 0m11.116s 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1 -- # autotest_cleanup 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1392 -- # local autotest_es=18 00:19:47.027 13:15:39 nvmf_dif -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:47.028 13:15:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:59.312 INFO: APP EXITING 00:19:59.312 INFO: killing all VMs 00:19:59.312 INFO: killing vhost app 00:19:59.312 INFO: EXIT DONE 00:19:59.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:59.312 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:59.312 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:59.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:59.877 Cleaning 00:19:59.877 Removing: /var/run/dpdk/spdk0/config 00:19:59.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:59.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:59.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:59.877 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:59.877 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:59.877 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:59.877 Removing: /var/run/dpdk/spdk1/config 00:19:59.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:59.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:59.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:19:59.877 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:59.877 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:59.877 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:59.877 Removing: /var/run/dpdk/spdk2/config 00:19:59.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:59.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:59.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:59.877 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:59.877 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:59.877 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:59.877 Removing: /var/run/dpdk/spdk3/config 00:19:59.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:59.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:59.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:59.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:59.877 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:59.877 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:59.877 Removing: /var/run/dpdk/spdk4/config 00:19:59.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:59.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:59.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:59.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:59.877 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:59.877 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:59.877 Removing: /dev/shm/nvmf_trace.0 00:19:59.877 Removing: /dev/shm/spdk_tgt_trace.pid58739 00:19:59.877 Removing: /var/run/dpdk/spdk0 00:19:59.877 Removing: /var/run/dpdk/spdk1 00:19:59.877 Removing: /var/run/dpdk/spdk2 00:19:59.877 Removing: 
/var/run/dpdk/spdk3 00:19:59.877 Removing: /var/run/dpdk/spdk4 00:19:59.877 Removing: /var/run/dpdk/spdk_pid58594 00:19:59.877 Removing: /var/run/dpdk/spdk_pid58739 00:19:59.877 Removing: /var/run/dpdk/spdk_pid58931 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59018 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59040 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59155 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59173 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59291 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59481 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59627 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59692 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59762 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59853 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59923 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59956 00:19:59.877 Removing: /var/run/dpdk/spdk_pid59991 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60053 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60152 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60585 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60637 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60688 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60704 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60771 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60787 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60854 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60870 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60916 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60934 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60974 00:20:00.136 Removing: /var/run/dpdk/spdk_pid60992 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61114 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61150 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61224 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61534 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61546 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61583 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61596 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61612 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61636 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61650 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61671 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61690 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61704 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61719 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61743 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61757 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61778 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61797 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61816 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61826 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61849 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61864 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61885 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61912 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61929 00:20:00.136 Removing: /var/run/dpdk/spdk_pid61964 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62028 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62051 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62066 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62092 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62104 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62117 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62156 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62175 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62208 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62213 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62228 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62232 00:20:00.136 Removing: 
/var/run/dpdk/spdk_pid62248 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62258 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62267 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62282 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62311 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62337 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62347 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62381 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62391 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62400 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62440 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62456 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62484 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62491 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62499 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62512 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62519 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62527 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62534 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62542 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62616 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62663 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62768 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62807 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62852 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62872 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62883 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62904 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62941 00:20:00.136 Removing: /var/run/dpdk/spdk_pid62962 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63032 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63061 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63106 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63166 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63222 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63251 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63343 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63385 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63418 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63642 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63734 00:20:00.136 Removing: /var/run/dpdk/spdk_pid63768 00:20:00.136 Removing: /var/run/dpdk/spdk_pid64109 00:20:00.395 Removing: /var/run/dpdk/spdk_pid64147 00:20:00.395 Removing: /var/run/dpdk/spdk_pid64437 00:20:00.395 Removing: /var/run/dpdk/spdk_pid64850 00:20:00.395 Removing: /var/run/dpdk/spdk_pid65119 00:20:00.395 Removing: /var/run/dpdk/spdk_pid65898 00:20:00.395 Removing: /var/run/dpdk/spdk_pid66707 00:20:00.395 Removing: /var/run/dpdk/spdk_pid66823 00:20:00.395 Removing: /var/run/dpdk/spdk_pid66898 00:20:00.395 Removing: /var/run/dpdk/spdk_pid68156 00:20:00.395 Removing: /var/run/dpdk/spdk_pid68416 00:20:00.395 Removing: /var/run/dpdk/spdk_pid71689 00:20:00.395 Removing: /var/run/dpdk/spdk_pid71992 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72102 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72230 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72262 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72285 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72307 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72406 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72535 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72685 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72758 00:20:00.395 Removing: /var/run/dpdk/spdk_pid72947 00:20:00.395 Removing: /var/run/dpdk/spdk_pid73030 00:20:00.395 Removing: /var/run/dpdk/spdk_pid73117 00:20:00.395 Removing: /var/run/dpdk/spdk_pid73415 00:20:00.395 Removing: /var/run/dpdk/spdk_pid73825 00:20:00.395 Removing: /var/run/dpdk/spdk_pid73827 
00:20:00.395 Removing: /var/run/dpdk/spdk_pid74103 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74119 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74133 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74164 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74169 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74467 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74516 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74792 00:20:00.395 Removing: /var/run/dpdk/spdk_pid74985 00:20:00.395 Removing: /var/run/dpdk/spdk_pid75361 00:20:00.395 Removing: /var/run/dpdk/spdk_pid75864 00:20:00.395 Removing: /var/run/dpdk/spdk_pid76659 00:20:00.395 Removing: /var/run/dpdk/spdk_pid77237 00:20:00.395 Removing: /var/run/dpdk/spdk_pid77240 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79121 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79181 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79242 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79302 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79423 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79478 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79538 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79597 00:20:00.395 Removing: /var/run/dpdk/spdk_pid79913 00:20:00.395 Removing: /var/run/dpdk/spdk_pid81073 00:20:00.395 Removing: /var/run/dpdk/spdk_pid81221 00:20:00.395 Removing: /var/run/dpdk/spdk_pid81462 00:20:00.395 Removing: /var/run/dpdk/spdk_pid82010 00:20:00.395 Removing: /var/run/dpdk/spdk_pid82172 00:20:00.395 Removing: /var/run/dpdk/spdk_pid82329 00:20:00.395 Removing: /var/run/dpdk/spdk_pid82426 00:20:00.395 Clean 00:20:06.949 13:15:57 nvmf_dif -- common/autotest_common.sh@1451 -- # return 18 00:20:06.949 13:15:57 nvmf_dif -- common/autotest_common.sh@1 -- # : 00:20:06.949 13:15:57 nvmf_dif -- common/autotest_common.sh@1 -- # exit 1 00:20:06.961 [Pipeline] } 00:20:06.983 [Pipeline] // timeout 00:20:06.991 [Pipeline] } 00:20:07.010 [Pipeline] // stage 00:20:07.018 [Pipeline] } 00:20:07.022 ERROR: script returned exit code 1 00:20:07.022 Setting overall build result to FAILURE 00:20:07.040 [Pipeline] // catchError 00:20:07.048 [Pipeline] stage 00:20:07.050 [Pipeline] { (Stop VM) 00:20:07.064 [Pipeline] sh 00:20:07.341 + vagrant halt 00:20:10.621 ==> default: Halting domain... 00:20:17.195 [Pipeline] sh 00:20:17.474 + vagrant destroy -f 00:20:21.659 ==> default: Removing domain... 00:20:21.672 [Pipeline] sh 00:20:21.949 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:21.958 [Pipeline] } 00:20:21.973 [Pipeline] // stage 00:20:21.980 [Pipeline] } 00:20:22.000 [Pipeline] // dir 00:20:22.006 [Pipeline] } 00:20:22.022 [Pipeline] // wrap 00:20:22.030 [Pipeline] } 00:20:22.043 [Pipeline] // catchError 00:20:22.053 [Pipeline] stage 00:20:22.055 [Pipeline] { (Epilogue) 00:20:22.069 [Pipeline] sh 00:20:22.350 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:24.260 [Pipeline] catchError 00:20:24.261 [Pipeline] { 00:20:24.274 [Pipeline] sh 00:20:24.609 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:24.609 Artifacts sizes are good 00:20:24.617 [Pipeline] } 00:20:24.633 [Pipeline] // catchError 00:20:24.644 [Pipeline] archiveArtifacts 00:20:24.651 Archiving artifacts 00:20:24.868 [Pipeline] cleanWs 00:20:24.881 [WS-CLEANUP] Deleting project workspace... 00:20:24.881 [WS-CLEANUP] Deferred wipeout is used... 00:20:24.887 [WS-CLEANUP] done 00:20:24.889 [Pipeline] } 00:20:24.909 [Pipeline] // stage 00:20:24.915 [Pipeline] } 00:20:24.935 [Pipeline] // node 00:20:24.940 [Pipeline] End of Pipeline 00:20:24.971 Finished: FAILURE
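
Annotation on the failure path recorded above: fio returns non-zero inside fio_dif_rand_params, the ERR trap prints the first backtrace, nvmftestfini kills pid 81953 and unloads nvme_tcp/nvme_fabrics/nvme_keyring, and run_test "nvmf_dif" propagates the status to autotest.sh, which is why the build ends with Finished: FAILURE. The shell fragments quoted in the two backtraces come from test/common/autotest_common.sh and test/nvmf/target/dif.sh; read together they form the call chain below. This is a condensed sketch, not the full scripts: only the quoted lines are certain, while the function headers for fio_plugin and run_test, the local/shift argument handling, and the initialization of $asan_lib, $plugin, $fio_dir and $rootdir are assumptions filled in for readability. Helpers such as gen_fio_conf, timing_enter/timing_exit and xtrace_restore/xtrace_disable are defined elsewhere in the harness.

    # Call chain reconstructed from the backtraces above (a sketch, not the full scripts).
    # Assumed: $asan_lib, $plugin, $fio_dir and $rootdir are set earlier in
    # test/common/autotest_common.sh; only the quoted lines are certain.

    function fio_plugin() {
        local plugin=$1   # assumed: the first argument selects the fio plugin .so
        shift
        # Preload the sanitizer library to fio if fio_plugin was compiled with it
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@"
    }

    function fio_bdev() {
        fio_plugin "$rootdir/build/fio/spdk_bdev" "$@"
    }

    function fio_nvme() {
        fio_plugin "$rootdir/build/fio/spdk_nvme" "$@"
    }

    # test/nvmf/target/dif.sh: feeds a generated fio job file and a JSON subsystem
    # config (both via process substitution) to the spdk_bdev ioengine.
    fio() {
        fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf)
    }

    # test/common/autotest_common.sh: run_test times a named test case and returns
    # its status; lines 1120-1130 are quoted, the header and shift are assumed.
    run_test() {
        local test_name=$1
        shift
        timing_enter $test_name
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        xtrace_restore
        time "$@"
        xtrace_disable
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        timing_exit $test_name
    }

In this run the chain is run_test "nvmf_dif" dif.sh -> run_test "fio_dif_rand_params" fio_dif_rand_params -> fio <(create_json_sub_conf 0 1 2) -> fio_bdev -> fio_plugin, so a non-zero exit from the fio binary surfaces at every level shown in the traces.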